IMPACT OF AI ON REGULATORY COMPLIANCE IN INFORMATION SECURITY
Keywords:
Artificial Intelligence, Regulatory Compliance, Information Security, AI Risk Management, Governance Frameworks, NIST AI RMF, Machine Learning, Natural Language Processing, Ethical AI, Algorithmic Bias, Transparency, Accountability, Healthcare Compliance, Financial Compliance, Risk Mitigation, Data Privacy
Abstract
Artificial intelligence (AI) has brought a significant shift to information security: it automates processes, supports risk management, and enables faster, better-informed decision-making in real time. This paper examines how AI can ease adherence to regulations while also highlighting problems such as algorithmic bias, opaque processes, and significant ethical questions. Advances in machine learning and natural language processing (NLP) now allow organizations to interpret and manage intricate regulatory requirements proactively, an essential capability in heavily regulated sectors such as healthcare and finance. Although AI has much to offer, its challenges demand attention, including the opacity of "black box" algorithms and the need for robust systems and skilled personnel to make it work well. This paper identifies the problems that arise and suggests ways to manage them, with particular attention to the NIST AI Risk Management Framework, which aims to reduce risk while keeping AI use within legal and ethical bounds. AI's contribution to compliance is already visible in practice, from fraud detection in financial firms to the handling of confidential information in healthcare settings. The paper underscores the importance of fairness, transparency, and accountability in AI governance, and shows that collaboration among regulatory authorities, technology experts, and ethicists can produce flexible rules that keep innovation aligned with societal values.
References
E. Tan, M. Petit Jean, A. Simonofski, T. Tombal, B. Kleizen, M. Sabbe, L. Bechoux, and P. Willem, Artificial intelligence and algorithmic decisions in fraud detection: An interpretive structural model, Data & Policy, 5, 2023, e25.
European Union, General Data Protection Regulation (GDPR), Official Journal of the European Union, L119, 2016, 1-88. Retrieved from https://eur-lex.europa.eu/eli/reg/2016/679/oj.
United States, Health Insurance Portability and Accountability Act of 1996 (HIPAA), Public Law 104-191, 1996. Retrieved from https://www.hhs.gov/hipaa.
M. H. Sarker and R. Nowrozy, AI-driven cybersecurity: An overview, security intelligence modeling, and research directions, SN Computer Science, 2, 2021, 173.
J. Walters, D. Dey, D. Bhaumik, and S. Horsman, Complying with the EU AI Act, arXiv preprint arXiv:2307.10458, 2023.
D. Korobenko, A. Nikiforova, and R. Sharma, Towards a privacy and security-aware framework for ethical AI: Guiding the development and assessment of AI systems, arXiv preprint arXiv:2403.08624, 2024.
National Institute of Standards and Technology, Artificial Intelligence Risk Management Framework (AI RMF 1.0), NIST, 2023. Retrieved from https://www.nist.gov.
B. W. Wirtz, J. C. Weyerer, and C. Geyer, Artificial intelligence and the public sector—applications and challenges, International Journal of Public Administration, 42(7), 2019, 596-615.
License
Copyright (c) 2025 Pranav Mani Tripathi (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.