In an effort to manage the risks associated with artificial intelligence, the White House recently issued an executive order on AI. The executive order highlights risks such as AI-related fraud, bias, and national security concerns, emphasizing the need for comprehensive risk assessments.
We spoke with Julie Myers Wood, CEO of Guidepost Solutions, about the compliance and risk management implications of the executive order for companies engaging with AI technologies.
This executive order has significant compliance and risk management implications for companies using AI technologies. Could you share some practical advice on how businesses can proactively manage these implications to avoid potential legal, reputational, and operational challenges?
Before adopting or expanding the use of AI technologies, companies should perform comprehensive, AI-specific risk assessments, drawing on professionals with appropriate knowledge and experience. For both internally developed and third-party AI technology, these evaluations should cover safety, security, bias, ethics, and compliance with federal regulations, and should be grounded in industry-standard frameworks such as the NIST AI Risk Management Framework. When companies engage third-party providers for AI technology, the provider itself should also be assessed for risks such as cybersecurity, business continuity, and privacy, in addition to its adherence to applicable regulatory standards.
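To make this concrete, here is a minimal, hypothetical sketch in Python of how a compliance team might track such an assessment as structured data keyed to the NIST AI RMF's four core functions (Govern, Map, Measure, Manage). The class names, risk areas, and vendor are illustrative assumptions for this sketch, not part of any official NIST tooling.

```python
from dataclasses import dataclass, field
from enum import Enum

# The four core functions of the NIST AI Risk Management Framework.
class RMFFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class AssessmentItem:
    """One checklist item tied to an RMF function and a risk area."""
    function: RMFFunction
    risk_area: str          # e.g. "bias", "cybersecurity", "privacy"
    question: str
    answered: bool = False
    finding: str = ""

@dataclass
class VendorAssessment:
    """Tracks assessment items for one internal system or third-party provider."""
    subject: str
    items: list[AssessmentItem] = field(default_factory=list)

    def open_items(self) -> list[AssessmentItem]:
        # Items still awaiting a documented answer from the vendor or team.
        return [i for i in self.items if not i.answered]

# Illustrative usage with a hypothetical third-party provider.
assessment = VendorAssessment(subject="ExampleVendor LLM API")
assessment.items = [
    AssessmentItem(RMFFunction.MAP, "bias",
                   "Has the vendor documented training-data provenance and known bias risks?"),
    AssessmentItem(RMFFunction.MEASURE, "security",
                   "Are penetration-test or SOC 2 results available for review?"),
    AssessmentItem(RMFFunction.MANAGE, "business continuity",
                   "What is the vendor's incident-response and continuity plan?"),
]
print(f"{len(assessment.open_items())} open items for {assessment.subject}")
```

Even a lightweight structure like this gives an assessment an audit trail: each finding is tied to a named framework function, which makes gaps visible when regulators or counsel ask how a given risk was evaluated.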
Given the rapidly evolving regulatory environment surrounding AI, what specific recommendations do you have for companies when it comes to evaluating and procuring AI-based products and services, particularly those offered by third-party providers, to ensure they remain in compliance with federal rules and regulations?
Beyond vendor due diligence, it is critical that companies adopting AI establish robust governance structures to oversee AI usage, with a focus on ethical practices, risk management, and legal compliance. Once established, these structures must be continuously maintained so they adapt to the rapidly evolving AI regulatory landscape. That requires businesses to stay informed about changes in laws, regulations, and technologies, as well as other developments that may affect applicable risks.
How do you see AI compliance evolving in 2024? How should companies be thinking about their implementations?
Looking ahead to 2024, AI compliance is likely to become both more critical and more complex as technologies and regulations evolve. Companies should focus on adaptable, scalable AI strategies that can align with emerging compliance requirements. That means investing in ongoing training, risk-assessment tools, and partnerships with expert compliance advisors to stay ahead in a dynamic regulatory environment.