“Please be advised that your call may be recorded for quality assurance and training purposes.”
We’ve all heard this message at some point — a familiar prelude to countless conversations worldwide.
This disclosure has remained largely unchanged over the years, but it may no longer be enough. The bar for what counts as legally acceptable consent is rising, and increasingly privacy-savvy consumers aren’t satisfied with passive “I didn’t object, so I guess I consented” assumptions.
So What’s Going On?
Consumers are filing an increasing number of lawsuits against both brands (e.g., Patagonia, The Home Depot) and vendors (e.g., Google, Salesforce, Nuance, AWS), alleging that these parties have violated the California Invasion of Privacy Act (CIPA). These allegations take two angles:
- Insufficient consent. Plaintiffs argue that brands have failed to adequately obtain consent for recording and analyzing customer calls and disclosing data usage, especially regarding sharing data with third parties.
- Unauthorized use. On the vendor side, plaintiffs allege that vendors use customer data beyond agreed terms — specifically, to train their AI models without clear consent — which could classify them as “interceptors” under CIPA.
Why Now?
On The Technology Side …
While it’s true that brands have been recording contact center conversations for a long time for “quality assurance and training purposes,” the landscape has shifted significantly with the introduction of advanced AI technologies. Today, these recordings are not just reviewed by human ears but also analyzed in real time by sophisticated AI-powered solutions to extract detailed customer insights that go far beyond traditional quality checks. The typical disclosures do not give customers enough context about what exactly they are being asked to consent to.
On The Consumer Side …
In the absence of federal privacy regulation, US consumers — and eager law firms — are turning to class-action lawsuits with increasing frequency when they feel that their data is being collected, used, and/or shared without their permission. To date, these privacy-oriented class-action lawsuits have focused on marketing missteps, such as dropping advertising pixels where they don’t belong, but this recent string of lawsuits should put contact center professionals on alert, too. Consumers are increasingly aware of the data economy — 74% of US online adults are aware that their information and activities are collected by the websites and apps they use — and many are doing what they can to wrest back some control. For example, 92% of US online adults use at least one tool to protect their online privacy and security. And privacy advocates have scored some wins, such as a settlement with Google that required it to change the disclosures in Chrome’s incognito mode and delete inappropriately collected data.
What Should Enterprise Leaders Do?
Well, what they shouldn’t do is close their eyes and hope that they aren’t next. But this also shouldn’t be a reason for enterprise leaders to shy away from leveraging AI to better understand why customers are calling. In light of these concerns, leaders should:
- Update contact center disclosures. While the current approach has had a good run for the past couple of decades, enterprises must ensure that disclosures transparently communicate AI’s involvement in call analysis so that customers are clearly informed at the point of interaction.
- Rewrite privacy policies in clear, plain language. Your privacy policy should articulate the specifics of data collection, processing, and sharing. Customers are more likely to trust brands that communicate clearly, so write the policy in plain language that takes your customers’ interest and expertise into account. Invest in making privacy and security information as intuitive and user-friendly as any of your customer-facing digital experiences, which means usability-testing the policy with customers.
- Review and update vendor contracts. Many of these lawsuits allege that vendors are using conversation data without permission for their own AI model training, going beyond the scope of services they deliver to their enterprise clients. To prevent being caught on the back foot, contact center leaders should proactively collaborate with legal and contract management teams to understand what current contracts allow and review new contracts in alignment with enterprise AI policies.