Asset-driven Threat Modeling for AI-based Systems
Abstract
Threat modeling is a popular method for developing systems securely by building awareness of potential areas of future damage caused by adversaries. Its benefit lies in indicating areas of concern early, paving the way to consider mitigations during the design stage. However, threat modeling for systems relying on Artificial Intelligence (AI) is still not well explored. Conventional threat modeling methods and tools do not address AI-related threats, and research on this intersection still lacks solutions capable of guiding and automating the process, as well as evidence that such methods hold up in practice. To evaluate whether the work at hand can guide and automate the identification of AI-related threats during the architecture definition stage, several experts were tasked to create a threat model of an AI-based system designed in the healthcare domain. The usability of the solution was well perceived, and the results indicate that it is effective for threat identification.
Index Terms:
AI Security, Cybersecurity, Threat Modeling
I Introduction
Artificial Intelligence (AI) is considered a disruptive technology that is being integrated into a myriad of different domains, ranging from healthcare applications to embedded implementations of AI [1], which now serve as a key contributor to other technologies such as 6G [2]. Even more impressive than the range of domains that see interest in technologies that can be summarized under the term “AI” is the speed at which they are adopted. For example, ChatGPT attracted 100 million monthly active users within just a few weeks [3].
The fact that AI technologies are now readily available to individuals, corporations, and national actors has also given rise to concern. For example, [4] have analyzed the implications of Large Language Models (LLMs) in the context of the Swiss Cybersecurity landscape, summarizing threats such as spear phishing, vulnerable code injections, and remote code execution. Furthermore, researchers have demonstrated that these attacks can be executed in a realistic setting [5]. Aside from LLMs, extensive research has demonstrated weaknesses in related AI technologies, including Machine Learning [6], Federated Learning [7], and Computer Vision [8].
It appears that not only the aforementioned hopes but also the concerns surrounding these technologies are rightly part of current discussions. However, it is vital to consider that adoption is ongoing – organizations are actively integrating these technologies into their products and services. This raises the question of how organizations should approach these security concerns, especially given the scarcity of cybersecurity talent and the speed at which AI services are integrated.
One approach that has demonstrated value in the conventional application security field is threat modeling, which is used for secure software development, risk assessment, or to foster security awareness. Being part of secure development processes (e.g., SSDLC, SAMM), threat modeling is valid outside a dedicated cybersecurity context [9], which is critical considering that AI system development may be driven by data scientists and software engineers leveraging services.
While threat modeling can serve as a key step to identify and mitigate (by means of prevention or response) cybersecurity issues at design time [10], creating suitable threat models is still a challenge for software engineers and data scientists. Multiple factors complicate the creation of threat models for AI systems. Research has devoted wide attention to investigating threats and vulnerabilities, but largely without proposing practical cybersecurity approaches. Furthermore, existing threat modeling methodologies and tools are conceptualized for conventional software systems and, hence, do not directly support AI threat identification. Recent research [11, 12] addressed how to apply threat modeling to AI. However, this limited body of research has not shown how to support or automate the process, especially during the design phase. Moreover, these approaches were not deployed in scenarios involving real users and design problems.
Thus, the key contribution of this paper is an asset-driven threat modeling approach and a guiding tool for said methodology. The methodology comprises five steps that are aligned with the design procedures of AI-based systems. To guide and automate threat identification, existing literature is transformed into a queryable ontology. A stencil library is provided to connect architecture diagrams with the semantics of the ontology. This allows for automated asset and threat identification when AI-based system architectures are modeled. Finally, the presented work details experimental results demonstrating that (i) the tool can reproduce a threat model created by cybersecurity experts. For this, (ii) different types of users were involved in experiments to understand whether non-security personnel can reproduce these results, followed by (iii) a qualitative investigation of the tool’s perceived usability. Overall, the tool can guide and automate threat modeling for AI, effectively reproducing threat models with acceptable usability when used by data scientists.
TABLE I: Overview of related work
Work | Contribution | Evaluation | Domain |
---|---|---|---|
[12] [2020] | Method, Survey | Illustrative Case Study | Security Requirements Engineering |
[13] [2021] | Degradation Quantification Method | Demonstration | Adversarial Machine Learning |
[14] [2022] | Threat Model | Demonstration | Cellular Networks |
[15] [2022] | Methodology | Illustrative Case Study | Threat Modeling of AI Systems |
ThreatFinderAI [2024] | Open-source Tool, Methodology | Field Study | Threat Modeling of AI Systems |
II Background and Related Work
Literature related to this work can be grouped into three segments: (i) research identifying adversarial attacks on AI, (ii) established threat modeling tools and methods, and (iii) a small body of literature looking into the combination of the former two. Due to the lack of research on AI threat modeling, painting a realistic picture of the problem domain requires a summary of research in all three areas.
A recent survey [17] organizes cyber attacks on AI systems along the Machine Learning (ML) pipeline. During data collection and preprocessing, data poisoning attacks influence the resulting model by injecting samples. These may be falsified in the data source or the database used for collection [16, 17]. The goal of the attack may vary [18, 19, 17], and multiple poisoning strategies (e.g., random or targeted data manipulation and injection) exist [20]. Spanning the feature selection and model training stages, several strategies are identified that can replace the model with a poisoned one [17, 21]. During the inference stage, attacks achieving model inversion, inference, and failure are described. Model inversion aims to recover information on the training samples [17], while extraction attacks attempt to obtain or reconstruct the model based on limited access [17, 22].
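To make the poisoning mechanics concrete, the following minimal sketch (not taken from the cited works; dataset, model, and poisoning rates are illustrative assumptions) shows how randomly flipping a fraction of training labels degrades a simple classifier:

```python
# Minimal, illustrative label-flipping poisoning on a synthetic dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison(labels, fraction, rng):
    """Flip the labels of a random fraction of the training samples."""
    poisoned = labels.copy()
    idx = rng.choice(len(poisoned), size=int(fraction * len(poisoned)), replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary labels: flip 0 <-> 1
    return poisoned

rng = np.random.default_rng(0)
for fraction in (0.0, 0.1, 0.3):
    clf = LogisticRegression(max_iter=1000).fit(X_train, poison(y_train, fraction, rng))
    print(f"poisoned fraction={fraction:.1f}, test accuracy={clf.score(X_test, y_test):.3f}")
```

Higher poisoning rates typically reduce the test accuracy, mirroring the degradation targeted by the attacks described above.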
Looking into threat modeling tools and methods, none of the popular tools, such as the Microsoft Threat Modeling Tool, CAIRIS, Threatspec, SDElements, or Tutamen, focuses on threat modeling for AI systems [23, 24, 25, 26]. Although some, such as CAIRIS, offer the ability to create custom threat libraries, they provide no AI-related taxonomies or methodological support. Furthermore, many of the tools are non-free or closed-source software, and it is unclear how well they are understood outside the security domain. Here, there appears to be a tradeoff between flexibility and guidance. For example, diagrams.net [27] is popular in threat modeling due to its flexibility and widespread familiarity [10]. In such a setting, STRIDE, a mnemonic-based brainstorming method, could be applied to the AI domain at the expense of requiring users to survey and relate threats to the system manually.
In this context, related activities from industry can be introduced to highlight ongoing efforts in the field. MITRE ATT&CK is a well-known catalogue of malicious techniques [28]. As a complementary knowledge framework, MITRE ATLAS includes tactics that are specific to AI [29]. Another knowledge base that provides guidance for the mitigation of AI threats was proposed by Microsoft [30, 31]. Similarly, OWASP has presented guides to ensure the security of systems relying on AI and, more specifically, LLMs [32]. A comprehensive report of AI-related threats is presented by the European Union Agency for Cybersecurity (ENISA) [33], and these threats are further related to architectural AI assets in [34].
The third and most closely related literature group reports evidence of integrating the AI paradigm within threat modeling. The limited number of publications [15] (see TABLE I) connect potential risks to the elements generated throughout various phases of the life cycle of ML models, ranging from the initial requirements analysis to maintenance.
[12] applies conventional threat modeling consisting of data flow diagramming and STRIDE-based threat identification. While the methodology reports the successful mapping of a threat taxonomy to an illustrative model, the mapping process is carried out manually by experts. Furthermore, limitations are acknowledged, such as results being restricted to a single synthetic case study and the lack of a usability investigation.
In [13], a gold standard dataset is used to evaluate the degradation of a model during the productive stage. A metric is proposed that quantifies the degradation loss, which could quantify the impact of a threat. However, the focus on existing models indicates that the method may not be applicable during the design stage, which is critical for threat modeling.
A domain-specific threat model is created in [14], focusing on Open Radio Access Network (O-RAN) architectures. Thus, no generic approach is evaluated. The paper by [15] is the most closely related contribution to threat modeling of AI-based systems. It advocates for integrating threat modeling methodologies in AI security analysis and introduces the STRIDE-AI methodology, a tailored adaptation of the STRIDE framework for ML systems. The methodology assigns ML-specific interpretations to security properties, facilitating the identification of threats to AI assets. However, it involves a manual mapping process, lacking automation, which hinders scalability and adaptability to system changes. The methodology’s evaluation is based on a single use case without the involvement of participants, providing insights but not covering all challenges in diverse ML applications.
In summary, while one might argue that attacks on AI are not radically different from conventional cyber attacks, it is not clear how straightforward the creation of a threat model for AI is. More specifically, the guiding factors and the achievable degree of automation, especially when real users create a threat model during the design stage of a system, are unclear, presenting an opportunity to develop a guiding tool oriented toward the design process of AI system architectures.

III Design and Implementation
To design and implement a threat modeling approach for AI-based systems, it is necessary to map the architectural semantics of these systems to the threat modeling process. The architecture of the ThreatFinderAI prototype is visualized in Figure 1. At the top, a high-level overview of the threat modeling procedure is outlined. This process was leveraged to design the individual components that support the overall process and, in sum, provide automated threat modeling for AI systems. The architectural components are shown in the center, consisting of six key components supporting the approach of this work. To investigate the feasibility and effectiveness of that approach, a prototype was designed and implemented, as presented at the bottom.
TABLE II: Generic five-step threat modeling process
Step | Goal | Description |
---|---|---|
1 | Objective Identification | Determine system security goals |
2 | Assessment | Identify system assets and interactions |
3 | Decomposition | Select relevant assets |
4 | Threat Identification | Categorize threats to assets |
5 | Identify Vulnerability | Analyze threats and determine vulnerability |
From the methodological perspective, the generic five-step threat modeling process summarized in TABLE II was leveraged, allowing for a step-by-step approach to address the specific problem domain. For the objective identification step, an analysis of the literature revealed the necessity to adopt the AI-specific proposal of security principles from [15, 34]. There, the traditional CIA principles (i.e., confidentiality, integrity, availability) are extended to include authorization and non-repudiation as key concepts. While the definition of important security goals may not be fruitful at this stage, it is crucial to ensure that the business relevance of the system to be developed is well understood [36].
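As a minimal illustration of this first step (the identifiers below are assumptions and not the tool's actual representation), the extended objective set can be captured as a simple enumeration from which the business-relevant goals are selected:

```python
# Illustrative representation of the extended security objectives (step 1):
# the CIA triad plus authorization and non-repudiation.
from enum import Enum

class SecurityObjective(Enum):
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"
    AUTHORIZATION = "authorization"
    NON_REPUDIATION = "non-repudiation"

# Example selection for a data-sharing platform prioritizing confidentiality.
selected_objectives = {SecurityObjective.CONFIDENTIALITY, SecurityObjective.INTEGRITY}
```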
In the second step, the system must be closely analyzed and understood from an architectural perspective. For this, the context of the threat modeling process is essential – whether the threat model is created for the design of a completely new architecture or for an existing system as part of a risk assessment. In this work, ThreatFinderAI relies on visual modeling of architectures. Hence, it is crucial to verify whether existing system diagrams and models are available. In any case, modeling and drawing the architecture of the AI-based system can aid comprehension. Here, it is essential to draw a holistic picture of the architecture, for which the guiding model of the AI life cycle from [34] can be helpful to elicit all activities and the systems involved. For example, even when using a pre-trained model, it is important to draw the data collection procedure to capture the whole attack surface, even though a service provider may perform it transparently.
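A simple way to support this holistic view is to check the modeled activities against a life cycle checklist; the following sketch uses illustrative stage names rather than the exact life cycle model of [34]:

```python
# Illustrative coverage check of the modeled architecture against AI life cycle
# activities (stage names are assumptions, not the exact ENISA life cycle).
LIFE_CYCLE_STAGES = [
    "data collection", "data preprocessing", "model training",
    "model evaluation", "deployment", "inference", "monitoring",
]

def coverage_gaps(modeled_stages):
    """Return life cycle stages not yet covered by the architecture diagram."""
    return [stage for stage in LIFE_CYCLE_STAGES if stage not in modeled_stages]

# A diagram covering only training and inference misses, e.g., data collection.
print(coverage_gaps({"model training", "inference"}))
```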

In the third step, the architecture model serves as a means to identify relevant assets. Conceptually, these are the functional and data assets that are subject to the security goals. For example, if a healthcare project postulates that the confidentiality of the data is paramount, then it is vital that the training data (among other assets) is identified. As already mentioned, ThreatFinderAI takes a visual approach since it is assumed to be well-established for the development of software architectures. To solve this problem, stencils can support the annotation of software architecture diagrams – if each element is carefully annotated with metadata to identify a unique asset from a taxonomy, the diagrams can be analyzed in an automated manner.

In the fourth step, the set of assets is used as an input to identify threat events that can impact those assets. For example, the presence of training data that is managed by an untrusted actor could indicate that the system is vulnerable to a data poisoning attack. As will be discussed for the implementation of the prototype, it is critical to consider the literature for this step since architects may not have the resources to develop or research novel threats on their own.
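As a conceptual sketch of this fourth step (asset and threat identifiers are hypothetical, not entries of the actual knowledge base), such a rule could look as follows:

```python
# Hypothetical rule: training data handled by an actor outside the trust
# boundary suggests a data poisoning threat.
def identify_threats(assets, untrusted_actors):
    """Derive candidate threat events from assets annotated in the diagram."""
    threats = []
    if "training-data" in assets and untrusted_actors:
        threats.append({
            "title": "Data poisoning",
            "impacts": ["integrity"],
            "rationale": "training data is handled by untrusted actor(s): "
                         + ", ".join(sorted(untrusted_actors)),
        })
    return threats

print(identify_threats({"training-data", "trained-model"}, {"external data provider"}))
```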
In the final steps, the list of threats is analyzed, potentially revealing threats whose impact cannot be accepted or ignored given the risk sensitivity of the surrounding business context. These steps require the adoption of technical, organizational, or strategic mitigation controls, which could be guided and automated by specific guidelines. However, the elicitation of security controls is out of scope for threat modeling.
III-A Prototype Implementation
To implement the components proposed for the ThreatFinderAI threat modeling approach, the components outlined at the bottom of Figure 1 were implemented and integrated into a web-based solution. Starting from the front end, the user interacts with a web-based graphical user interface implemented as a Single-page Application (SPA) using React.js [37]. First, on the /home page, the user uploads a previously developed diagram and selects the essential security properties for the business case. If no diagram exists, another page hosts the interactive diagram editor component. This component consists of an inline HTML frame loading the diagrams.net diagram editor [27]. Aside from the functionality inherited from the editor, the web application communicates with the frame to pre-load the stencil libraries as well as to automatically load and continuously store the model in the browser’s storage.
Furthermore, a bespoke stencil library was crafted to simplify and guide the asset modeling stage. The stencil library provides one stencil for each asset identified from the comprehensive report provided by ENISA [33], which serves to build an extensive yet extensible ontology that gives detail on threat events, actors, and assets. In total, 72 stencils are formalized into an XML file, allowing the annotation of metadata to analyze the resulting diagram automatically. As partially shown in the left modal of Figure 2, the stencils are grouped into six categories, encouraging modeling not only static software elements but also processes and actors.
After the users freely and interactively model the system architecture, the diagram can be transferred to the Python backend, where it is analyzed to suggest relevant threats automatically. To do so, the diagram is exported from the editor and sent to the backend as an XML file over HTTPS. There, it is first parsed to retrieve all visual elements from the diagram. For all elements carrying annotations of the stencil library, the contextualized assets are derived using a JSON representation of the ontological AI asset knowledge base extracted from the ENISA report. This knowledge base is semantically connected to another knowledge base, which holds 96 formalized threat descriptions, each contextualized with the connected AI asset, category, title, description, and potential impact on the security objectives. Thus, based on the derived assets, the knowledge base is queried, and a threat report is generated and returned to the client, detailing a set of potentially relevant threats for each asset.
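A simplified sketch of this backend analysis is given below; the XML structure, attribute names, and knowledge base fields are illustrative assumptions and do not reflect the exact diagrams.net export schema or the ENISA-derived knowledge base:

```python
# Simplified backend step: parse the exported diagram, collect annotated assets,
# and query a JSON threat knowledge base to build a per-asset threat report.
import json
import xml.etree.ElementTree as ET

DIAGRAM_XML = """
<diagram>
  <object asset="training-data" label="Patient Records"/>
  <object asset="trained-model" label="Risk Prediction Model"/>
  <object label="Unannotated note"/>
</diagram>
"""

THREAT_KB = json.loads("""
[
  {"asset": "training-data", "category": "Poisoning", "title": "Targeted data poisoning",
   "impacts": ["integrity"]},
  {"asset": "trained-model", "category": "Extraction", "title": "Model stealing via query access",
   "impacts": ["confidentiality"]}
]
""")

def analyze(diagram_xml, threat_kb):
    """Map every annotated asset in the diagram to its known threats."""
    root = ET.fromstring(diagram_xml)
    assets = {el.get("asset") for el in root.iter() if el.get("asset")}
    return {a: [t for t in threat_kb if t["asset"] == a] for a in assets}

report = analyze(DIAGRAM_XML, THREAT_KB)
for asset, threats in sorted(report.items()):
    print(asset, "->", [t["title"] for t in threats])
```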
In the front end, the threat report is displayed. Here, the initial selection of the security goals comes into play by focusing the report on threats that primarily relate to these objectives. Given that threat modeling is an interactive endeavor requiring collaboration between business and technical stakeholders, the final threat model can be exported in PDF form. Moreover, the threat model diagrams can be exported and extended later on, using the ThreatFinderAI tool or by uploading them to draw.io, hence providing the freedom to reuse the models in other tools. For example, another component of a toolkit could leverage these models to quantify the threats or to suggest control mechanisms.
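The objective-based focusing described above can be sketched as a simple filter over the generated report (field names are assumptions consistent with the previous sketch):

```python
# Keep only threats whose impact intersects the security objectives selected
# at the beginning of the process (illustrative report structure).
report = {
    "training-data": [
        {"title": "Targeted data poisoning", "impacts": ["integrity"]},
        {"title": "Training data exfiltration", "impacts": ["confidentiality"]},
    ],
}

def focus_report(report, selected_objectives):
    """Filter the per-asset threat lists by the selected security objectives."""
    return {asset: [t for t in threats if set(t["impacts"]) & selected_objectives]
            for asset, threats in report.items()}

print(focus_report(report, {"confidentiality"}))
```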
IV Evaluation
Developing ThreatFinderAI had the goal of investigating whether a supporting tool can guide and automate the threat identification stage when modeling threats for AI-based systems. To assess the effectiveness of ThreatFinderAI, the problem arises that no “perfect” model exists against which the tool’s output can be tested. Furthermore, it must be acknowledged that threat modeling is, in practice, still highly centered around humans developing threat models to create value [40]. Thus, the main question guiding the evaluation of ThreatFinderAI was whether experts with only limited cybersecurity expertise could identify relevant threats in a practical scenario and how they perceive the usability of doing so.
IV-A Methodology
Thus, a scenario-driven field experiment was conducted. First, a threat model was created by cybersecurity and data science experts working together to develop a secure architecture for an AI system in the medical field. Although it cannot be proven that the experts’ expertise leads to a threat model that is relevant and sufficiently exhaustive, this model provides a baseline against which the threat model created by non-security experts leveraging the tool in the second step can be compared.
The practical context of the model that was created in both steps was the ongoing development of a platform to collect, store, share, and train models from data. The platform is a healthcare platform tailored for clinical data analysis by engineers, practitioners, and researchers. It aims to provide a robust data-gathering system with controlled data synthesis to facilitate experimentation and modeling in the healthcare domain. The platform prioritizes data privacy and security by incorporating advanced anonymization techniques, attribute-based privacy measures, and reliable tracking systems. The main functionalities of the platform are organized into three primary modules (i.e., Model Training, Model Auditor, and Data Synthesizer), seven supporting modules (i.e., Data Anonymization Toolkit, Data Uploader, Cross-Borders Database, Dataset Explorer, Dataset Builder, Dataset Evaluator, and Federated Learning), and three crosscutting modules (i.e., Security Control, User Interface, and Orchestrator), addressing diverse needs and requirements within the healthcare analytics landscape. As one might expect from the presence of anonymization technology, data confidentiality and privacy are of utmost importance to business representatives. Not only may data breaches lead to regulatory fines, but the overall trust in the data-sharing platform is a crucial property to stimulate data collection from various parties.
IV-B Execution
In the first stage, four experts collaborated to create a threat model of the platform architecture: two with conventional cybersecurity expertise, one with specific AI security knowledge, and one with a data science background. To do so, they leveraged diagrams.net [27] to draw the system architecture, its boundaries, potential actors, and relevant threats. In total, ten areas of concern were identified and closely investigated. Here, it is important to state that all four experts had to rely on external threat information, which was surveyed manually in the form of reports and academic literature. Based on this, potential threats within the system were identified and linked to specific threat actors, including malicious platform users, external threats, infrastructure administrators, and automated external entities like malware. These identified threats span a broad spectrum of security concerns. The experts pinpointed 44 threats throughout the system, with certain threats recurring across multiple components. It is important to highlight that ThreatFinderAI’s database, containing 96 distinct threats, covers all the threats identified by the experts and extends them with more specific threats, offering a comprehensive overview, which is often desired, such as when building attack trees [41].
In the second step, seven participants took part in a threat modeling workshop using ThreatFinderAI, receiving a video-based tutorial on how to use the tool and information about the previously described platform architecture. Although transferring knowledge about the scenario to the participants certainly involves some complexity, the experiment was executed this way to keep the architecture used for threat identification constant. Furthermore, in a threat modeling workshop, it is not unrealistic to see such a transfer of expertise, for example, when a team of data science software architects is tasked to evaluate an existing system. After applying the tool to the scenario, the threat models were collected, and the participants were guided through a questionnaire to understand how the usage of ThreatFinderAI is perceived.
TABLE III: Participant background and AI knowledge
# | Educational Background | AI Knowledge |
---|---|---|
1 | Master of Data Science | Practical experience |
2 | Master of Data Science | Practical experience |
3 | Bachelor of Science Software Systems | Theoretical knowledge |
4 | Bachelor of Science Software Systems | Theoretical knowledge |
5 | Bachelor of Science Information Systems | Little to no understanding |
6 | Master of Science Pharmacy | Little to no understanding |
7 | Master of Law | Little to no understanding |
At the beginning of the questionnaire, the background and expertise of the participants were surveyed. As visible from TABLE III, participants from different backgrounds were selected (assessed by the highest completed academic degree). Notably, none of the participants indicated knowledge of cybersecurity, and only two of the five participants with a Computer Science degree majored in Data Science. These two were also the only ones who stated that they have practical experience working with AI, while the remaining participants considered themselves theoretically knowledgeable about AI system architecture at most.
Next, it was investigated whether the design assumption that computer scientists are already familiar with the diagram editor diagrams.net was justified. Based on statements shared on a Likert scale, all participants with a technical background considered themselves familiar with the tool. In a similar question on related tools such as Microsoft Threat Modeling Tool or OVVL, none of the participants expressed familiarity.
Additional questions investigated the participants’ perceived ability to use the tool. All participants felt successful in navigating the tool. Participant Six faced challenges during the asset identification step, acknowledging a limited understanding of AI technology from an architectural perspective. When rating the clarity of the task instructions, six out of seven considered them at least sufficiently clear. With respect to the concrete scenario and architecture provided, three participants found it easy to understand, while the remaining four found it sufficiently understandable.
TABLE IV: System Usability Scale (SUS) scores per participant
# | Educational Background | Score |
---|---|---|
1 | Master of Data Science | 55 |
2 | Master of Data Science | 70 |
3 | Bachelor of Science Software Systems | 85 |
4 | Bachelor of Science Software Systems | 52.5 |
5 | Bachelor of Science Information Systems | 52.5 |
6 | Master of Science Pharmacy | 75 |
7 | Master of Law | 45 |
Average | | 62.14 |
Concluding the questionnaire, the perceived usability was assessed by means of the System Usability Scale (SUS), which provides a simple, standardized scoring based on ten questions [42]. The resulting scores are shown in TABLE IV. When all participants are included, the average score of 62.14 corresponds to acceptable usability. Looking at Participants One to Five, who all hold computer science expertise, Participant One’s score stands out negatively. Although SUS in itself is not diagnostic, the answers provided appear conflicting. For example, the participant indicated that the tool might require assistance from a technical person while also being easy to learn. Based on additional open feedback collected, Participant One expressed a dislike for the diagram editor leveraged in ThreatFinderAI, while otherwise positively acknowledging the tool’s features.
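For reference, the standard SUS scoring underlying TABLE IV can be sketched as follows (the example responses are illustrative, not participant data):

```python
# Standard SUS scoring: odd-numbered items contribute (response - 1), even-numbered
# items contribute (5 - response); the sum is scaled by 2.5 to a 0-100 range.
def sus_score(responses):
    """Compute the SUS score from ten Likert responses (1-5)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)  # i = 0, 2, ... are items 1, 3, ...
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```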
IV-C Analysis
Since the threat modeling process is guided and the key steps are automated, artifacts, such as the annotated architecture diagrams and the resulting tabular threat models, were recorded and analyzed by the experts who conducted the initial analysis. In the security objective selection, it was observed that both the participants and the expert group mostly agreed on the relevant properties.
Based on the expert assessment, Participants One, Two, Three, and Six successfully identified all relevant threats. For the participants with data science expertise, this underscores the advantage of AI knowledge in threat modeling for AI-related systems and does not come as a surprise, given that they are the most likely target group; it is, however, surprising that even a layperson, Participant Six, achieved this, while Participants Four, Five, and Seven missed multiple threats.
The participants with a background in data science, therefore, discovered all relevant threats (and more granular variants) from the expert-based model. While the application is effective in that scenario, the efficiency of the application still suffers from the common concern of yielding a significant number of false positives compared to the 44 threats discovered by the experts. Thus, the overall threat modeling procedure still requires a discussion of the threat identification step, as is the case in the established threat modeling processes.
In summary, participants effectively identified all potential threats using the architectural model, addressing previously identified gaps in the literature. However, efficiency could be improved by further filtering threats, potentially through subcategories of security objectives and requirements. Moreover, exploring participants’ prioritization of threats and inferred vulnerabilities could offer valuable insights. However, this was beyond the scope of the current study and remains a challenge for future research. From the experiment, it can be observed that practitioners with a data science background could use this tool as guidance for an initial threat identification step. The resulting threat model would require further analysis either by leveraging additional tools or by involving security experts.
V Summary and Future Work
Due to the necessity of considering the architectures and security concerns of AI-based systems in current system engineering, this paper proposes ThreatFinderAI. The approach aligns the AI security domain with the established threat modeling process through a guiding prototype. The prototype includes a front end to collect the most relevant security objectives and enables architectural diagramming with a bespoke stencil library of AI assets. In the back end, the diagrams are automatically analyzed against an established AI security report, which is transformed into a computable ontology.
To understand the practicability, effectiveness, and usability of the prototype, a user-centric experiment confronted real users with a real-world AI system architecture. The results show the feasibility of the AI threat modeling approach and prototype, but also the effectiveness of non-security experts in identifying threats. However, the results from the experiment also demonstrate that threat modeling may not be trivial for practitioners without cybersecurity expertise. Specifically, further research is needed to improve usability and to support the final stages of threat analysis, where threats are prioritized and mitigations are designed. Furthermore, although the results are obtained from practitioners working on a real system architecture, their significance is limited by the small number of participants working on a single case.
In the future, the usability and effectiveness will be further assessed and tested with a broader set of participants and scenarios. Moreover, to improve the threat analysis stages, the applicability of threat interpretation and quantification approaches from established domains such as cybersecurity economics or risk management will be investigated.
Acknowledgments
This work has been partially supported by (a) the Swiss Federal Office for Defense Procurement (armasuisse) with the CyberMind and RESERVE (CYD-C-2020003) projects and (b) the University of Zürich (UZH).
References
- [1] L. Perri, Gartner Inc., “What’s New in Artificial Intelligence from the 2023 Gartner Hype Cycle,” August 2023, https://www.gartner.com/en/articles/what-s-new-in-artificial-intelligence-from-the-2023-gartner-hype-cycle, Last Visit January 2024.
- [2] Nokia Corporation, “6G explained,” January 2024, https://www.nokia.com/about-us/newsroom/articles/6g-explained, Last Visit January 2024.
- [3] K. Hu, Reuters, “ChatGPT sets record for fastest-growing user base - analyst note,” February 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/, Last Visit January 2024.
- [4] A. Kucharavy, Z. Schillaci, L. Maréchal, M. Würsch, L. Dolamic, R. Sabonnadiere, D. P. David, A. Mermoud, and V. Lenders, “Fundamentals of Generative Large Language Models and Perspectives in Cyber-Defense,” arXiv preprint https://arxiv.org/abs/2303.12132, March 2023.
- [5] P. Dixit, engadget, “A ’silly’ attack made ChatGPT reveal real phone numbers and email addresses,” November 2023, https://www.engadget.com/a-silly-attack-made-chatgpt-reveal-real-phone-numbers-and-email-addresses-200546649.html, Last Visit January 2024.
- [6] X. Wang, J. Li, X. Kuang, Y. an Tan, and J. Li, “The security of machine learning in an adversarial setting: A survey,” Journal of Parallel and Distributed Computing, vol. 130, pp. 12–23, 2019.
- [7] L. Lyu, H. Yu, J. Zhao, and Q. Yang, Threats to Federated Learning. Cham: Springer International Publishing, 2020, pp. 3–16.
- [8] N. Akhtar and A. Mian, “Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey,” IEEE Access, vol. 6, pp. 14 410–14 430, 2018.
- [9] OWASP, “Software Assurance Maturity Model,” September 2023, https://owasp.org/www-project-samm/, Last Visit January 2024.
- [10] J. von der Assen, M. F. Franco, C. Killer, E. J. Scheid, and B. Stiller, “CoReTM: An Approach Enabling Cross-Functional Collaborative Threat Modeling,” in IEEE International Conference on Cyber Security and Resilience (CSR 2022), Rhodes, Greece, July 2022, pp. 1–8.
- [11] L. Mauri and E. Damiani, “Stride-ai: An approach to identifying vulnerabilities of machine learning assets,” in 2021 IEEE International Conference on Cyber Security and Resilience (CSR). IEEE, 2021, pp. 147–154.
- [12] C. Wilhjelm and A. A. Younis, “A Threat Analysis Methodology for Security Requirements Elicitation in Machine Learning Based Systems,” in 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C), 2020, pp. 426–433.
- [13] L. Mauri and E. Damiani, “Estimating Degradation of Machine Learning Data Assets,” ACM Journal of Data and Information Quality (JDIQ), vol. 14, no. 2, pp. 1–15, 2021.
- [14] E. Habler, R. Bitton, D. Avraham, D. Mimran, E. Klevansky, O. Brodt, H. Lehmann, Y. Elovici, and A. Shabtai, “Adversarial Machine Learning Threat Analysis and Remediation in Open Radio Access Network (O-RAN),” arXiv preprint https://arxiv.org/abs/2201.06093, March 2023.
- [15] L. Mauri and E. Damiani, “Modeling threats to AI-ML systems using STRIDE,” Sensors, vol. 22, no. 17, 2022.
- [16] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Artificial intelligence and statistics, 2017, pp. 1273–1282.
- [17] R. S. Sangwan, Y. Badr, and S. M. Srinivasan, “Cybersecurity for AI Systems: A Survey,” Journal of Cybersecurity and Privacy, vol. 3, no. 2, pp. 166–190, 2023.
- [18] L. Muñoz-González, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee, E. C. Lupu, and F. Roli, “Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization,” in Proceedings of the 10th ACM workshop on artificial intelligence and security, 2017, pp. 27–38.
- [19] B. Nelson, M. Barreno, F. J. Chi, A. D. Joseph, B. I. Rubinstein, U. Saini, C. Sutton, J. D. Tygar, and K. Xia, “Exploiting Machine Learning to Subvert Your Spam Filter,” LEET, vol. 8, no. 1-9, pp. 16–17, 2008.
- [20] Ilmoi, “Poisoning attacks on Machine Learning,” July 2019, https://towardsdatascience.com/poisoning-attacks-on-machine-learning-1ff247c254db/, Last Visit January 2024.
- [21] J. Natarajan, “Cyber Secure Man-in-the-Middle Attack Intrusion Detection Using Machine Learning Algorithms,” in AI and Big Data’s Potential for Disruptive Innovation, January 2020, pp. 291–316.
- [22] R. N. Reith, T. Schneider, and O. Tkachenko, “Efficiently Stealing your Machine Learning Models,” in Proceedings of the 18th ACM Workshop on Privacy in the Electronic Society, 2019, pp. 198–210.
- [23] CAIRIS, “Threat Modelling, Documentation and More,” 2022, https://cairis.org/cairis/tmdocsmore/, Last Visit January 2024.
- [24] Threatspec, “Threatspec,” June 2019, https://threatspec.org/, Last Visit January 2024.
- [25] SecurityCompass, “SD Elements Datasheet v5.17,” 2023, https://docs.sdelements.com/release/latest/guide/docs/datasheet.html/, Last Visit January 2024.
- [26] Tutamantic, “Feauture — Tutamantic,” January 2021, https://www.tutamantic.com/page/features, Last Visit January 2024.
- [27] JGraph Ltd, “Diagram Software and Flowchart Maker,” https://www.diagrams.net/, Last Visit January 2024.
- [28] The MITRE Corporation, “MITRE ATT&CK (Adversarial Tactics, Techniques, and Common Knowledge),” https://attack.mitre.org/, Last Visit January 2024.
- [29] ——, “MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems),” https://atlas.mitre.org/, Last Visit January 2024.
- [30] Microsoft Corporation. Threat Modeling for AI/ML Systems and Dependencies. https://learn.microsoft.com/en-us/security/engineering/threat-modeling-aiml, Last Visit January 2024.
- [31] A. Marshall, J. Parikh, E. Kiciman, and R. Kumar, “Threat Modeling AI/ML Systems and Dependencies,” Security documentation, 2019.
- [32] OWASP, “AI Security and Privacy Guide,” https://owasp.org/www-project-ai-security-and-privacy-guide/#how-to-deal-with-ai-security, Last Visit January 2024.
- [33] European Union Agency for Cybersecurity (ENISA), “Securing Machine Learning Algorithms,” 2021.
- [34] ——, “Artificial Intelligence Cybersecurity Challenges, Threat Landscape for Artificial Intelligence,” 2020.
- [35] S. Myagmar, A. J. Lee, and W. Yurcik, “Threat Modeling as a Basis for Security Requirements,” in Symposium on Requirements Engineering for Information Security (SREIS), August 2005.
- [36] M. F. Franco, F. Künzler, J. von der Assen, C. Feng, and B. Stiller, “RCVaR: an Economic Approach to Estimate Cyberattacks Costs using Data from Industry Reports,” arXiv preprint https://arxiv.org/abs/2307.11140, July 2023.
- [37] Meta Platforms, “React,” https://react.dev/, Last Visit January 2024.
- [38] Sharif Jamo, “AiThreats,” 2024, https://github.com/JSha91/AiThreats.
- [39] Sharif Jamo and von der Assen Jan, “ThreatFinder,” 2024, https://www.csg.uzh.ch/threatfinder/.
- [40] Threat Modeling Manifesto Working Group, “Threat Modeling Manifesto,” January 2024, https://www.threatmodelingmanifesto.org, Last Visit January 2024.
- [41] A. Shostack, Threat Modeling: Designing for Security. John Wiley & Sons, 2014.
- [42] GitLab Inc., “System Usability Scale (SUS),” 2023, https://handbook.gitlab.com/handbook/product/ux/performance-indicators/system-usability-scale, Last Visit January 2024.