As artificial intelligence (AI) becomes a key driver of economic growth, it presents both opportunities and challenges. To ensure AI benefits society, employees, customers, and organizations, it’s crucial to prevent a scenario where the concentration of power in the tech sector leads to a new form of digital dominance.
Modern Echoes Of The East India Company
Today’s tech giants – like Google, Facebook, and Amazon – wield power in ways that echo the monopolistic practices of past corporate entities such as the infamous East India Company (EIC). While comparisons with historic militarized corporations may seem stark, they serve as a reminder of the potential consequences of unchecked corporate power.
Historian William Dalrymple aptly noted the dangers of a powerful, unregulated company operating without sufficient oversight, a situation that resonates in today’s context. The UN Advisory Body on AI echoes this in Governing AI for Humanity, stating ‘technology cannot be left to the whims of the market’ and such challenges require a ‘holistic, global approach’.
While the EIC’s control over India lasted only 75 years, its legacy as a ruthless capitalist force offers harsh lessons for the 21st century, lessons particularly relevant as public sector leaders shape the future of AI and digital governance.
Economic Dominance
Like the EIC, today’s tech giants started with niche markets but have expanded to dominate global digital economies. Google controls around 92% of the search engine market, while Amazon’s e-commerce dominance reshapes retail landscapes. AI amplifies this power, optimizing operations and targeting consumers with unprecedented precision. For instance, OpenAI’s ChatGPT became the fastest-adopted consumer application in history just two months after its launch. Even when they stumble, these giants’ vast influence makes them difficult to challenge.
Social Disruption
Modern tech giants, enhanced by AI, have disrupted social dynamics through their platforms. Facebook’s algorithms influence online interactions and can contribute to the spread of misinformation. Google’s search monopoly shapes access to information, subtly influencing public opinion and knowledge. Meanwhile, messaging platforms like Telegram and Signal, depended on by millions for privacy, also provide a haven for illegal activities. Although 49% of US adults in Forrester’s Global Government, Society And Trust Survey, 2024, expressed distrust in AI-generated information, AI-driven content delivery can easily manipulate and reinforce biases, particularly given the lack of transparency surrounding its use.
Public Harm
Whilst the EIC’s exploitative practices led to significant global trauma, today’s tech giants also face criticism for contributing to new forms of public harm. Social media platforms are increasingly linked to mental health issues, cyberbullying, and the spread of harmful content. These platforms also provide new arenas for criminal activity: France has preliminarily charged Telegram CEO Pavel Durov with accountability for such activity on his site, even as the same platform is lauded for its role in supporting Ukraine’s defense against Russian aggression. The U.S. Surgeon General has called for a warning label on social media platforms, and other countries, including Australia, are considering or implementing age-based limits and identity verification measures. AI has the potential to exacerbate these issues, a concern that 54% of US online adults share. And when 45% of US adults say they don’t trust big tech organizations to manage the potential risks of AI, the need for transparency and accountability in its deployment becomes all the more urgent.
Regulatory Challenges
Historically, governments have struggled to regulate powerful corporations, and the digital age presents similar challenges. Tech giants operate across borders, often bound only by domestic regulations that do not adequately address their global impact. While efforts like the European Union’s AI Act are steps in the right direction, enforcement remains a challenge. With 52% of online US adults agreeing that AI poses a serious threat to society, effective oversight is critical to preventing the abuses that can arise in unregulated markets.
Preventing The Pitfalls Of Digital Overreach
While the term “digital imperialism” might seem extreme, it captures the extensive control and influence tech giants exert over global markets, culture, and politics. They relentlessly harvest user data, often without clear consent, raising privacy and ethical concerns. Their influence on public opinion, through control over information and advertising, parallels historical instances of corporate overreach. AI intensifies these issues, making regulatory intervention even more important.
To avoid the mistakes of the past and ensure that the digital future is equitable and fair, mission leaders in the public sector globally must consider the following actions:
- Strengthen antitrust laws. Reinforce antitrust regulations to prevent monopolistic practices and encourage competition. The European Union’s actions against anti-competitive practices serve as an example of how to promote fair competition in digital markets.
- Enhance data privacy regulations. Implement comprehensive data privacy laws to protect consumer information, akin to the GDPR in Europe. The GDPR ensures consumers have control over their personal data and holds companies accountable for data misuse.
- Promote transparency and accountability. Ensure tech companies disclose their operations and algorithms, promoting transparency. The California Consumer Privacy Act mandates that companies provide clear information on data collection practices and offer consumers the right to opt out of data sales.
- Encourage international cooperation. Develop consistent global standards and policies that transcend national borders. The Cross-Border Privacy Rules System, led by the Asia-Pacific Economic Cooperation, facilitates international cooperation on privacy standards. Australia’s eSafety Commissioner and the EU CNECT have also signed the first ever “Digital Alliance”, to jointly enforce their respective online safety acts.
- Safeguard public interest. Establish independent oversight bodies to monitor the societal impacts of tech giants and align their actions with the public good. In Australia, the Competition and Consumer Commission has been active in regulating tech giants, for example through the News Media Bargaining Code, a model also adopted by Canada, which suffered the same punitive measures from Facebook as Australia did.
- Protect human rights. Commit to protection from the adverse impacts of AI not only on human rights but also on the public institutions upon which society depends. Efforts are already underway with the US signing the 46-member Council of Europe’s Framework Convention for Artificial Intelligence and Human Rights, Democracy, and the Rule of Law. The endorsement of the US and 11 other non-member nations is the first step towards a global AI treaty. If established, this would provide a legally enforceable basis for redress against discrimination resulting from AI use.
Learning From The Past To Safeguard The Future
The comparison between historical corporate overreach and modern tech giants is not meant to be a direct analogy but rather a cautionary tale. By learning from historical precedents and implementing detailed, cooperative regulatory measures, we can better manage the influence of today’s digital behemoths. This approach is crucial to prevent the negative consequences of digital imperialism and to foster a healthier, more equitable digital landscape, providing everyone with equal access to the AI advantage.