Building on US Securities and Exchange Commission (SEC) Chair Gary Gensler’s February 13 speech at Yale Law School about the misuse of AI for fraud, Department of Justice (DOJ) Deputy Attorney General Lisa Monaco, speaking at the University of Oxford, signaled stiffer sentences for offenses made significantly worse by the misuse of AI. If existing sentencing-guideline enhancements prove inadequate, the DOJ will seek to reform them.
These two speeches indicate not only the gravity that the US is assigning to AI risk but also that regulatory bodies acknowledge that AI amplifies malicious intent; misuse of AI will consequently have a multiplier effect on penalties. The bigger question is how these new guidelines will be applied and how they will affect enterprises. The logical starting point would be social media and advertising platforms whose algorithms may promote misleading and fraudulent messaging. Over the past decade, however, these intermediaries have largely been blamed and shamed without significant consequences. The goal of the DOJ and SEC is clearly to go straight after those inflicting the harm.
This takes us back to the big question: What does this mean for the typical enterprise? While AI leaders don’t have all the expertise needed to navigate a complex and emerging regulatory landscape, there are several key preparatory steps to take:
- Take stock of AI policies and practices related to existing regulations. While the DOJ’s and SEC’s statements related to fraud, the overall regulatory stance is that existing laws apply, and AI is merely a tool for executing intent. Based on the White House’s executive orders on AI, regulatory bodies are not only enforcing existing laws but beginning to align penalties to harm across all regulations. Whether in highly regulated industries such as life sciences with strict go-to-market rules, or in less regulated industries whose primary concerns are privacy and spam, AI governance teams need to engage risk teams to identify gates, audits, and accountability, as the financial and reputational penalties from missteps can increase. (A minimal sketch of an AI use-case register with review gates appears after this list.)
- Align business values, ethics, and the spirit of regulations. AI is now squarely in regulators’ headlights. There is little legal precedent, and the first cases will set the example; the litigation cost of being the first enterprise caught in the AI regulatory web can be significant. To avoid missteps, AI governance teams need clarity on business values and ethics and on how those are applied within the enterprise and in the context of AI use. In addition, regulators are telegraphing the spirit of the laws (e.g., privacy, safety, equal opportunity, and IP protection) as a deterrent. Without precedent, the prudent approach is to examine not only AI use but also how the company’s values and ethics are embedded in AI-based applications and how well that embedding aligns with regulatory guidance.
- Open the AI spigot incrementally. Most organizations have taken a slow road to adopting generative AI, or even traditional AI, for outward-facing applications. With companies such as TurboTax and Rite Aid hitting the news over poorly tested or improperly used AI in customer engagement, internal use cases can be the road to incremental success. But even small internal uses of AI (such as a loan approval engine assisting an internal auditor) have consequences if they influence the decision. AI governance teams must be integrally involved at all stages of the AI development lifecycle. Building small and scaling iteratively reduces regulatory risk (see the staged-rollout sketch after this list).
- Test, audit, contain, and optimize continuously. AI governance dashboards that only show AI model regulatory compliance, or that merely check the box on process compliance, are not enough. Enforcement of AI regulations will increase as regulators become more adept and more knowledgeable about AI’s influence. This changing climate means that AI governance leaders must pivot their AI risk posture from reactive to proactive and create business intelligence that drives action. Test plans that monitor current and planned AI benchmarks provide a window into AI expansion. Audits must span stewardship and delivery workflows as well as model impacts. To mitigate amplification, measures are needed to quickly contain and shut off AI systems when thresholds are exceeded (see the containment sketch after this list). Monitoring and control mechanisms must continuously optimize AI applications and internal practices.
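To make the first step concrete, here is a minimal, hypothetical sketch in Python of an AI use-case register with risk-tiered review gates. Every name in it (AIUseCase, RiskTier, REQUIRED_GATES, the gate labels) is an illustrative assumption, not a reference to any specific regulation or framework; real gates would come from your risk and legal teams.

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIUseCase:
    """One entry in an enterprise AI inventory."""
    name: str
    owner: str                  # accountable business owner
    regulations: list[str]      # regulations the risk team maps to this use case
    risk_tier: RiskTier
    gates_passed: list[str] = field(default_factory=list)


# Hypothetical gate requirements per risk tier.
REQUIRED_GATES = {
    RiskTier.LOW: ["privacy-review"],
    RiskTier.MEDIUM: ["privacy-review", "bias-audit"],
    RiskTier.HIGH: ["privacy-review", "bias-audit", "legal-signoff"],
}


def missing_gates(use_case: AIUseCase) -> list[str]:
    """Return the review gates this use case still needs before launch."""
    required = REQUIRED_GATES[use_case.risk_tier]
    return [g for g in required if g not in use_case.gates_passed]


# Example: a high-risk model that has cleared only one of three gates.
case = AIUseCase(
    name="transaction-fraud-scoring",
    owner="risk-analytics",
    regulations=["FCRA", "FTC Act Section 5"],
    risk_tier=RiskTier.HIGH,
    gates_passed=["privacy-review"],
)
print(missing_gates(case))  # ['bias-audit', 'legal-signoff']
```

The point of the structure is accountability: every use case has a named owner, an explicit regulatory mapping, and a computable gap between gates required and gates passed.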
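For opening the spigot incrementally, here is one plausible shape — a minimal sketch, assuming Python services and a percentage-based rollout — for capping how much traffic an AI assistant touches while a human retains the final decision. ROLLOUT_STAGES, score_with_model, and the loan-approval framing are hypothetical placeholders echoing the auditor example above.

```python
import random

# Fraction of eligible decisions the AI assistant may touch; each stage is
# unlocked only after a governance review of the previous stage's results.
ROLLOUT_STAGES = [0.05, 0.20, 0.50, 1.00]
CURRENT_STAGE = 0  # start at 5%


def score_with_model(application: dict) -> str:
    """Stand-in for the enterprise's actual model; returns a draft label."""
    return "approve" if application.get("credit_score", 0) >= 680 else "refer"


def recommend_loan_decision(application: dict) -> dict:
    """Produce a recommendation; a human auditor always makes the final call."""
    if random.random() < ROLLOUT_STAGES[CURRENT_STAGE]:
        recommendation, source = score_with_model(application), "ai-assisted"
    else:
        recommendation, source = None, "manual"  # auditor reviews unaided
    return {
        "recommendation": recommendation,
        "source": source,
        "final_decision_by": "human-auditor",  # the AI never decides alone
    }


print(recommend_loan_decision({"credit_score": 710}))
```

Tagging every output with its source ("ai-assisted" versus "manual") also gives auditors the paper trail they need if a decision is later challenged.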
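Finally, for continuous testing and containment, the sketch below shows one way a threshold-driven kill switch might look — again a minimal illustration under stated assumptions, not a production design. The metric names and threshold values (GuardrailThresholds, error_rate, drift_score, complaint_rate) are assumptions you would replace with your own monitored measures.

```python
from dataclasses import dataclass


@dataclass
class GuardrailThresholds:
    """Illustrative limits; real values come from risk and governance teams."""
    max_error_rate: float = 0.02       # share of flagged incorrect outputs
    max_drift_score: float = 0.15      # distribution shift vs. baseline
    max_complaint_rate: float = 0.005  # user-reported harm per request


def evaluate_guardrails(metrics: dict, limits: GuardrailThresholds) -> list[str]:
    """Return the names of any thresholds the latest metrics exceed."""
    breaches = []
    if metrics["error_rate"] > limits.max_error_rate:
        breaches.append("error_rate")
    if metrics["drift_score"] > limits.max_drift_score:
        breaches.append("drift_score")
    if metrics["complaint_rate"] > limits.max_complaint_rate:
        breaches.append("complaint_rate")
    return breaches


def containment_check(metrics: dict, limits: GuardrailThresholds) -> bool:
    """Disable serving when any threshold is breached; a human re-enables it."""
    breaches = evaluate_guardrails(metrics, limits)
    if breaches:
        # In production this would page the governance team and disable the
        # model endpoint; here we just report and halt.
        print(f"Containment triggered; breached thresholds: {breaches}")
        return False  # serving disabled
    return True  # serving continues


# Example: drift exceeds its threshold, so serving is shut off.
latest = {"error_rate": 0.01, "drift_score": 0.22, "complaint_rate": 0.001}
serving_enabled = containment_check(latest, GuardrailThresholds())
```

The design choice worth noting is that containment is automatic and conservative: any single breach halts serving, and a human decides when to resume, which limits the amplification effect regulators are targeting.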
For more insight into the quilt of US regulations, read Navigate The Patchwork Of US AI Regulations.
This blog post is based on an article published in JD Supra.