AI Systems in the Context of Threat Modeling, Risk Management, and Regulatory Requirements
DOI: https://doi.org/10.34767/SIMIS.2024.03.04

Keywords: artificial intelligence, machine learning, threat modeling, risk management, AI Act

Abstract
The paper presents a comprehensive approach to threat modeling and risk management in AI (Artificial Intelligence) systems, providing methods, tools, and guidance that help organizations build threat-resilient, legally compliant AI systems. The proposed systemic approach enables the identification and mitigation of the consequences of threats to AI systems, opening new research directions in the field of artificial intelligence security.
License

This work is licensed under the Creative Commons Attribution-ShareAlike 4.0 International license.