AI will revolutionize the internal workings of the public sector and how it engages with its constituents in the next five to 10 years, just as open data and third-party apps did in the 2010s. In my new report, “The State Of AI In The Public Sector, 2023,” I reveal data showing that public sector leaders aim to adopt AI to improve data discovery and quality (36%), streamline business operations (35%), and increase automation both internally (33%) and externally (32%).
The US and UK governments are already taking ‘landmark’ actions on the safety of AI tools and testing, but policy announcements like these rarely evoke concrete action within government itself. Any government organization that adopts AI must therefore do so with at least as much care as these policies expect of the private sector, if not more. Sadly, 49% of public sector leaders indicated to us that AI governance is left primarily to IT, even when AI-enabled systems are built by agency business units. Leaving AI oversight to IT alone poses serious challenges for any organization: it encourages an abdication of responsibility and sidelines business-level expertise, which weakens governance and increases both the likelihood and magnitude of non-technical risks. Ultimately, poor accountability for public-sector AI systems has the very real potential to erode public trust, a critical element in maintaining positive relations between governments and their constituents.
Despite Popular Belief, The Public Sector Is Not Content To Play Catch-up
Using third-party solutions for AI adoption can help speed the implementation of AI tools, ensuring that public sector agencies aren’t drastically outpaced by the technological progress of their private sector counterparts. Existing mechanisms such as public-private partnerships, along with more traditional co-creation and innovation models, can also help crowd in investment, technology, and willing participants. For example, the Dutch municipality of Amsterdam, with the support of local universities and the private sector, has led the way with the development of its scan car. This parking permit enforcement vehicle is based on AI software developed around the values of transparency and accountability, two of the seven levers of the Forrester Trust Model, demonstrating the importance of trust in AI. But given the multitude of risks associated with AI and its governance, it is also necessary to have a mid-term vision for developing in-house capabilities in areas of strategic interest. As my colleague J. P. Gownder rightly put it in his latest AI report, “People, Not Technology Or Data, Are The Key To Succeeding With AI.”
Forrester can provide support and guidance on AI system categorization and use cases, helping you integrate AI more effectively and build trust among your employees and customers by ensuring they have the understanding, skills, and ethics to use all forms of AI successfully.
Forrester clients: Let’s chat more about this phenomenon via a Forrester guidance session.