The US Federal Trade Commission (FTC) is officially investigating OpenAI, the creator of ChatGPT. The agency's letter poses more than 100 questions and requests supporting documentation related to OpenAI's business model and the reliability and responsibility of its large language models.
The investigation and letter offer enterprises a definitive set of areas for evaluating their own processes and assembling the supporting documentation to demonstrate that their AI is trusted and that they follow AI best practices. AI leaders, CIOs, and chief data officers can mitigate risk exposure by addressing eight areas of interest in the FTC request:
- AI literacy. Document and maintain disclosures and representations related to generative AI use. Trustworthiness, transparency, and knowledge of how genAI performs relative to expectations are crucial. Advance AI literacy among business decision-makers to ensure that they are knowledgeable about genAI and can appropriately gauge risks, benefits, and compliance needs.
- Data science best practices. Develop standards for model development and training. Regulators will evaluate training methods and techniques, data provenance and lineage, risk assessments, and the stakeholder input used to tune models. The OECD's AI Principles offer organizations guidance on AI best practices.
- Regulatory compliance. Unlike in the EU, US federal regulatory guardrails on AI products and services are in very early stages. That doesn't mean that genAI is compliance-free, however. Track AI legislation at the state level to understand new requirements, and monitor proposed legislation.
- Risk management. GenAI is still an emerging technology that requires a slightly different risk management process than more established tech. A governance framework to ensure accountability, along with detailed policies and procedures, dedicated ownership of risks and controls, and ongoing monitoring, is critical. Consistently assess and address the impact (upside and downside) of genAI across multiple risk domains, factoring in systemic risks.
- Third-party risk management. GenAI risk must be part of your due-diligence process for third-party relationships. How your third parties collect, retain, and utilize customer information acquired through API integrations or plug-ins is your responsibility. Consider what data and systems third parties can access and whether their security and risk maturity align with yours, and don’t ignore processes to offboard third parties.
- Incident management. Maintain a running list of incidents, issues, and risk events from policy violations (including genAI “hallucinations”) along with documentation of remediation plans. Have clear guidance on risk treatment protocols and maintain an audit trail for issues, incidents, and approvals for risk acceptance.
- Product lifecycle. Maintain a library and governance policies for API integrations and plug-ins. GenAI spans a portfolio of components maintained in developer and data engineering libraries, data science notebooks and feature stores, and data governance catalogs. AI governance practices must deploy standards, policies, and workflows to manage optimization, changes, and versioning across business, operational, and development stages.
- Data governance. Shore up data governance across data, data science, and genAI applications. With any employee now able to build or use genAI, protecting data and federating data governance are critical. Data governance must address data integrity, use rights/consent to the data, and data protection. Of note, the FTC named data scientists, subject matter experts, and leaders as all being responsible for data governance.
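To make the incident-management guidance concrete, here is a minimal sketch of an incident register with a timestamped audit trail and a formal risk-acceptance step. The schema, field names, and roles are illustrative assumptions, not an FTC-mandated format:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Incident:
    """One entry in a running list of genAI incidents and risk events."""
    incident_id: str
    category: str              # e.g., "hallucination", "policy_violation"
    description: str
    remediation_plan: str
    status: str = "open"       # open -> remediated or accepted
    audit_trail: list = field(default_factory=list)

    def log(self, actor: str, action: str) -> None:
        """Append an audit entry with a UTC timestamp."""
        self.audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "action": action,
        })

    def accept_risk(self, approver: str, rationale: str) -> None:
        """Record a formal approval for risk acceptance, per the guidance."""
        self.status = "accepted"
        self.log(approver, f"risk accepted: {rationale}")

# Illustrative usage with hypothetical actors:
incident = Incident(
    incident_id="INC-001",
    category="hallucination",
    description="Model cited a nonexistent court case in a customer response.",
    remediation_plan="Add retrieval grounding and a citation-verification step.",
)
incident.log("risk-owner@example.com", "incident opened")
incident.accept_risk("ciso@example.com", "low residual exposure after remediation")
```

Even a lightweight structure like this produces the two artifacts the guidance calls for: a running list of incidents with remediation plans, and an audit trail of approvals for risk acceptance.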
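The product-lifecycle recommendation can likewise be sketched as a governed component registry that tracks versions and lifecycle stages for API integrations and plug-ins. The stage names and record shape are assumptions for illustration only:

```python
# A minimal in-memory registry; a real implementation would back this
# with a data governance catalog or developer library.
registry: dict = {}

def register(name: str, version: str, owner: str, stage: str = "development") -> None:
    """Record a component version at a lifecycle stage, preserving history.

    The owner set on first registration is kept on later updates.
    """
    assert stage in {"development", "operational", "business"}
    entry = registry.setdefault(name, {"owner": owner, "history": []})
    entry["history"].append({"version": version, "stage": stage})

# Illustrative usage with a hypothetical plug-in:
register("crm-plugin", "1.0.0", owner="data-eng")
register("crm-plugin", "1.1.0", owner="data-eng", stage="operational")
current = registry["crm-plugin"]["history"][-1]
# current is {"version": "1.1.0", "stage": "operational"}
```

The point is less the code than the discipline: every component has an owner, a version history, and an explicit stage, so changes and optimizations are traceable across business, operational, and development stages.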
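Finally, the data-governance bullet implies a pre-use gate: before a dataset feeds a genAI application, verify integrity, use rights/consent, and protection controls. A minimal sketch, with control names that are purely illustrative assumptions:

```python
# Controls a dataset must carry before genAI use; names are hypothetical.
REQUIRED_CONTROLS = {"integrity_verified", "consent_on_file", "pii_protected"}

def approved_for_genai(dataset: dict) -> tuple:
    """Return (approved, missing_controls) for a dataset record."""
    missing = REQUIRED_CONTROLS - set(dataset.get("controls", []))
    return (not missing, missing)

# Illustrative usage: a dataset with no recorded use-rights consent.
customer_emails = {
    "name": "customer_emails_2023",
    "controls": ["integrity_verified", "pii_protected"],
}
ok, missing = approved_for_genai(customer_emails)
# ok is False; missing == {"consent_on_file"}
```

Federating governance then means giving data scientists, subject matter experts, and leaders shared visibility into which controls are present and which are missing before a dataset is used.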
US legislators (federal and state) are still in the early stages of drafting AI protection laws, but this news should serve as a catalyst for enterprises to accelerate AI governance and assess genAI use cases against existing regulations. Launch your AI governance program and apply tested risk management approaches. Find out more by reading about how to Get AI Governance Just Right and how to Proactively Manage Risk With The Forrester ERM Success Cycle.