The European Union Artificial Intelligence Act is here. It’s intended to regulate a matter of unprecedented complexity: ensuring that firms use AI in a safe, trustworthy, and human-centric manner. A rapid enforcement schedule and hefty fines for noncompliance mean that every company that deals with any form of AI should make it a priority to understand this landmark legislation. At the highest level, the EU AI Act:
- Has strong extraterritorial reach. Much like the GDPR, the EU AI Act applies to private and public entities that operate in the EU as well as those that supply AI systems or general-purpose AI (GPAI) models to the EU, regardless of where they’re headquartered.
- Applies differently to different AI actors. The EU AI Act establishes distinct obligations for actors across the AI value chain, defining roles such as GPAI model providers, deployers (i.e., users), manufacturers, and importers.
- Embraces a pyramid-structured, risk-based approach. The higher the risk of the use case, the more requirements it must meet and the stricter the enforcement of those requirements; as the risk decreases, so do the number and complexity of the requirements your company must follow.
- Includes fines with teeth. Not all violations are created equal, and neither are the fines. Noncompliance with the Act’s requirements can cost large organizations up to €15 million or 3% of global turnover, whichever is higher. Fines for violating the requirements on prohibited use cases are even higher: up to €35 million or 7% of global turnover (see the sketch after this list).
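To make the pyramid and the fine structure concrete, here is a minimal, illustrative Python sketch. The tier names and fine caps mirror the figures above, but the obligation comments and the `max_fine_exposure` helper are hypothetical assumptions for illustration, not legal guidance.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative mapping of the Act's pyramid: stricter tiers carry more obligations."""
    PROHIBITED = "prohibited"      # banned outright
    HIGH_RISK = "high_risk"        # heaviest obligations (risk management, logging, human oversight)
    LIMITED_RISK = "limited_risk"  # mainly transparency duties
    MINIMAL_RISK = "minimal_risk"  # no new mandatory requirements

# Maximum administrative fines: the higher of a fixed cap or a share of
# worldwide annual turnover, depending on the type of violation.
FINE_CAPS = {
    "prohibited_practice": (35_000_000, 0.07),   # up to EUR 35M or 7% of global turnover
    "other_noncompliance": (15_000_000, 0.03),   # up to EUR 15M or 3% of global turnover
}

def max_fine_exposure(violation: str, global_turnover_eur: float) -> float:
    """Return the theoretical maximum fine: whichever of the two caps is higher."""
    fixed_cap, turnover_share = FINE_CAPS[violation]
    return max(fixed_cap, turnover_share * global_turnover_eur)

# Example: a firm with EUR 2B global turnover facing a prohibited-practice violation.
print(max_fine_exposure("prohibited_practice", 2_000_000_000))  # 140000000.0 (7% exceeds EUR 35M)
```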
Treat The Act As The Foundation, Not The Ceiling
If we expect customers and employees to actually use the AI experiences we build, we have to create the right conditions to engender trust. It’s easy to think of trust as a nebulous thing, but we can define trust in a more tangible, actionable way. Trust is:
The confidence in the high probability that a person or organization will spark a specific positive outcome in a relationship.
We’ve identified seven levers of trust, from accountability and consistency to empathy and transparency.
The EU AI Act leans heavily into the development of trustworthy AI, and the 2019 Ethics Guidelines for Trustworthy AI lay out a solid set of principles to follow. Together, they form a framework for building trustworthy AI on familiar principles such as human agency and oversight, transparency, and accountability.
But legislation is a minimum standard, not a best practice, and building trust with consumers and users will be key to the success of AI experiences. For firms operating within the EU, and even those outside it, following the risk categorization and governance recommendations of the EU AI Act is a robust, risk-oriented approach. At a minimum, it will help you create safe, trustworthy, and human-centric AI experiences that cause no harm and avoid costly or embarrassing missteps; ideally, it will also drive efficiency and differentiation.
Get Started Now
There’s a lot to do, but at a minimum:
- Build an AI compliance task force. AI compliance starts with people. Regardless of what you call it — AI committee, AI council, AI task force, or simply AI team — create a multidisciplinary team to guide your firm along the compliance journey. Look to firms such as Vodafone for inspiration.
- Choose your role in the AI value chain for each AI system and GPAI model. Is your firm a provider, a product manufacturer embedding AI in its products, or a deployer (i.e., user) of AI systems? In a perfect world, matching requirements to your firm’s specific role would be a straightforward exercise, but in practice it’s complex: roles can overlap, and a deployer that substantially modifies an AI system can take on provider obligations.
- Develop a risk-based methodology and taxonomy for AI system and risk classification. The EU AI Act is a natural starting point as far as compliance is concerned, but consider going beyond the Act and applying the NIST AI Risk Management Framework and the new ISO/IEC 42001 standard. A minimal sketch of what such a taxonomy might capture follows this list.
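As a starting point, a compliance team might capture each AI system in a structured inventory record like the hypothetical sketch below. The field names (value-chain role, EU AI Act tier, additional framework mappings) are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI system inventory used for risk classification."""
    name: str
    value_chain_role: str          # e.g., "provider", "deployer", "importer", "manufacturer"
    intended_purpose: str
    eu_ai_act_tier: str            # e.g., "prohibited", "high_risk", "limited_risk", "minimal_risk"
    frameworks: list[str] = field(default_factory=list)   # e.g., ["NIST AI RMF", "ISO/IEC 42001"]
    open_actions: list[str] = field(default_factory=list)

# Example entry for a customer-facing chatbot deployed (not built) by the firm.
chatbot = AISystemRecord(
    name="support-chatbot",
    value_chain_role="deployer",
    intended_purpose="Answer routine customer-service questions",
    eu_ai_act_tier="limited_risk",                 # transparency duty: disclose the AI interaction
    frameworks=["NIST AI RMF", "ISO/IEC 42001"],
    open_actions=["Add AI disclosure to chat UI", "Log model provider and version"],
)
print(chatbot.eu_ai_act_tier)
```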
Read our latest report to learn more about how to approach the Act, or book a guidance session if you need help.