Navigating the Shifting Sands of AI Regulation: A Guide for Compliance Officers

The change in U.S. presidential administration has brought a sea change in the federal government’s approach to artificial intelligence (AI) regulation. While a lighter regulatory burden and accelerated innovation are anticipated, the shift may also create new challenges for chief compliance officers (CCOs) tasked with keeping pace with evolving regulation and compliance-related AI risks. Adopting controls and foundational governance best practices can mitigate these risks and help CCOs and corporate stakeholders remain nimble amid ongoing regulatory uncertainty.
The executive order issued by President Biden in October 2023 tasked organizations with taking action to minimize bias in AI outputs, implement data privacy measures and provide greater transparency around the use and testing of AI platforms. CCOs were expected to integrate these mandates into their organizations’ compliance frameworks by establishing internal protocols to ensure adherence to federal AI standards. Related activities included performing ongoing assessments of AI systems to identify and remediate compliance issues, educating employees and stakeholders about the ethical use of AI, and maintaining detailed records of compliance efforts and outcomes to demonstrate accountability during federal reviews and audits.
Although the recent replacement of that executive order with new policy may appear to remove federally mandated safety and transparency requirements, federal AI compliance rules have not entirely gone away. CCOs therefore need to monitor for changes to the U.S. Department of Justice (DOJ) Evaluation of Corporate Compliance Programs (ECCP) guidance. The most recent update to this guidance (September 2024) addresses the integration and oversight of AI within corporate compliance broadly, emphasizing governance, risk assessment and accountability in AI deployment. Unless the guidance is modified, the DOJ expects companies and CCOs to proactively integrate AI risk management into their compliance programs to mitigate legal risks, avoid steep penalties and maintain corporate integrity.
While some have interpreted the new policy as a deregulatory signal, the compliance picture on the ground is decidedly more mixed. Should federal oversight recede, individual states may impose their own AI governance rules, with many likely to model theirs on California’s growing body of AI legislation. Organizations with a global footprint will also need to adhere to emerging international standards such as the EU AI Act, whose first obligations took effect on February 2, 2025. The Act emphasizes principles of safety, transparency and explainability, and carries the potential for fines of up to 7% of a company’s annual global revenue.
Regulated firms will also need to ensure that their use of AI comports with fiduciary obligations and rules against unfair or deceptive practices. Regardless of how U.S. regulatory enforcement evolves in the years ahead, CCOs must prepare their organizations for greater complexity in AI regulation across geographies. Doing so will be integral to avoiding penalties, maintaining market access and preserving trust.
CCOs can prepare by taking proactive steps to mitigate the operational and reputational risks that accompany AI use. For example, applying greater scrutiny to prospective off-the-shelf AI solutions or partnerships can reduce exposure to unscrupulous actors, who often become emboldened during periods of fast-moving disruption.
By adopting controls around ethical use, data privacy and human governance, organizations can minimize bias and hallucination risks while improving accountability for AI-informed conclusions and outcomes. By working with a trusted partner to assess the organization’s use of AI, whether inside the compliance function or more broadly, CCOs can develop tailored controls that reflect their organization’s data and technology landscape.
For guidance in constructing a controls framework, CCOs can look to the standards and guidelines maintained by the National Institute of Standards and Technology (NIST), such as the NIST AI Risk Management Framework (AI RMF), which provide a robust, practical foundation for risk management, ethical considerations and technical governance in AI systems. These standards can serve as benchmarks for demonstrating compliance with U.S. and international AI regulations and can help satisfy DOJ expectations for effective risk management.
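To make those benchmarks operational, some compliance teams track each AI system in a simple risk register organized around the four NIST AI RMF functions (Govern, Map, Measure, Manage). The Python sketch below is purely illustrative: the class, its fields and the example values are hypothetical and are not part of any NIST specification.

```python
# Illustrative sketch only: a minimal AI risk-register entry organized around
# the four NIST AI RMF functions (Govern, Map, Measure, Manage). All names
# and example values here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class AIRiskRegisterEntry:
    system_name: str  # the AI system or vendor tool under review
    use_case: str     # how the business uses the system
    govern: list[str] = field(default_factory=list)   # policies, accountable owners
    map: list[str] = field(default_factory=list)      # risks identified in context
    measure: list[str] = field(default_factory=list)  # tests and metrics applied
    manage: list[str] = field(default_factory=list)   # mitigations and monitoring

    def open_items(self) -> list[str]:
        """Flag any AI RMF function with no documented activity yet."""
        return [name for name in ("govern", "map", "measure", "manage")
                if not getattr(self, name)]


entry = AIRiskRegisterEntry(
    system_name="Vendor LLM chatbot",
    use_case="Customer-service triage",
    govern=["Named executive owner", "Acceptable-use policy"],
    map=["Hallucination risk", "PII exposure in prompts"],
)
print(entry.open_items())  # -> ['measure', 'manage']: functions still undocumented
```

Even a lightweight structure like this makes gaps visible: any function with no documented activity is an open item to address before the system goes, or stays, in production.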
The AI regulatory picture remains confusing and uncertain, and that uncertainty will likely persist given the pace at which the technology is evolving. CCOs and corporate stakeholders can benefit from analyzing the risks inherent in AI technology and its use, irrespective of the regulations of the moment. By undertaking risk assessments, establishing appropriate controls to mitigate risk and continually working to build data and technology maturity in the enterprise, CCOs can help ensure their organizations are well-positioned for future AI regulations.
The views expressed herein are those of the author(s) and not necessarily the views of FTI Consulting, its management, its subsidiaries, its affiliates, or its other professionals.