Building trust in government by implementing responsible generative AI
A survey conducted by the IBM® Institute for Business Value (IBV) in late 2023 found that respondents believe government leaders often overestimate the public’s trust in them. We also found that while the public is still wary of new technologies such as artificial intelligence (AI), most people are in favor of governments adopting generative AI.
IBV surveyed more than 13,000 adults in nine countries, including the US, Canada, UK, Australia and Japan. All respondents had at least a basic understanding of AI and generative AI.
This survey was designed to gain an understanding of individuals’ perspectives on generative AI and its use by businesses and governments, as well as their expectations and intentions for using this technology in their work and personal lives. Respondents answered questions about their level of trust in government and their views on adopting and leveraging generative AI to deliver government services.
These findings reveal the complex nature of public trust in institutions and provide key insights for government decision-makers adopting generative AI globally.
Overestimation of public trust: a perceptual discrepancy
Trust is one of the most important pillars of public institutions. “Trust is central to a government’s ability to carry out its duties effectively,” says Cristina Caballe Fuguet, global government leader at IBM Consulting. “From local representatives to the highest positions in national government, citizens’ trust in government depends on a variety of factors, including the delivery of public services.”
Trust is essential for governments to lead on important issues such as climate change, public health, and the safe and ethical integration of emerging technologies into society. In the current digital age, integrity, openness and security are key elements of building that trust.
Another recent study from the IBV, the IBM Government Business Institute and the National Academy of Public Administration (NAPA) found that most government leaders understand that building trust requires focus and a commitment to collaboration, transparency and execution. However, the most recent IBV research shows that constituents’ trust in government is declining.
Respondents reported the greatest decline in trust in federal and national governments since the start of the pandemic: 39% said they had very low or extremely low levels of trust in their country’s national government institutions, compared to 29% before the pandemic.
This contrasts with the perceptions of government leaders surveyed in the same study, who indicated they are confident they have built and effectively grown constituents’ trust in their organizations since the COVID-19 pandemic. This discrepancy suggests that government leaders need to better understand their constituents and find ways to reconcile their own view of public sector institutions’ performance with how constituents actually perceive it.
The study also found that building trust in AI-based tools and citizen services will be a challenge for governments. Nearly half of respondents said they trusted traditional human-assisted services more, while only about one in five said they trusted AI-based services more.
Open and transparent AI implementation is the key to trust
This year, more than 60 countries and the EU (representing almost half of the world’s population) will vote to elect their representatives. Government leaders face numerous challenges, including ensuring that technology serves, rather than undermines, democratic principles, institutions and society.
According to David Zaharchuk, director of thought leadership research at IBV, “Safely and ethically integrating AI into our society and the global economy will be one of the biggest challenges and opportunities for governments over the next 25 years.”
Most individuals surveyed indicate that they are concerned about the potential negative impacts of generative AI. This shows that the public is paying close attention to this technology and to how organizations design and deploy it in a trustworthy and responsible way while complying with stringent security and regulatory requirements.
The IBV study found that people still have concerns about the adoption of this new technology and the impact it may have on areas such as decision-making, privacy and data security, and job security.
Despite this overall lack of trust in government and in emerging technologies, most individuals surveyed support governments’ use of generative AI for customer service and consider the current pace of adoption appropriate. Fewer than 30% of respondents believe the pace of adoption in the public and private sectors is too fast; most believe it is about right, and some even think it is too slow.
When it comes to specific use cases for generative AI, survey respondents have mixed views on using generative AI for a variety of citizen services. However, most agree with governments using generative AI for customer service, tax and legal advisory services, and education purposes.
These results show that citizens recognize the value of governments leveraging generative AI. However, trust remains an issue. If citizens don’t trust their government now, they certainly won’t if governments stumble as they adopt AI. By implementing generative AI in an open and transparent way, governments can build trust and capability simultaneously.
According to Casey Wreth, global government industry leader at IBM Technology, “The future of generative AI in the public sector is promising, but the technology brings new complexities and risks that must be addressed proactively. Government leaders must implement AI governance to manage risk, support compliance programs, and most importantly, gain public trust in the widespread use of AI.”
Integrated AI governance helps ensure trustworthy AI
“As adoption of generative AI continues to grow this year, it is important for governments to use tools like watsonx.governance™ to provide citizens with transparent and explainable AI workflows that illuminate the black box of AI-generated content. In this way, governments can become responsible stewards of this groundbreaking technology,” says Wreth.
IBM watsonx™, an integrated AI, data and governance platform, implements five fundamentals that help ensure trustworthy AI: fairness, privacy, explainability, transparency and robustness.
The platform provides a seamless, efficient and responsible approach to AI development across a variety of environments. More specifically, the recently launched IBM watsonx.governance helps public sector teams automate and operationalize these safeguards as they direct, manage and monitor their organization’s AI activities.
In essence, the tool promotes government transparency by opening the black box of how and where AI models get the information behind their output, much like a nutrition label on food packaging. It also gives organizations a clear process for proactively detecting and mitigating risks while supporting compliance programs for internal AI policies and industry standards.
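To make the “nutrition label” idea concrete, here is a minimal, hypothetical sketch (not the watsonx.governance API) of what such a model factsheet might record: the model’s purpose, its training data sources, evaluation results and known risks, rendered as a human-readable label. All names and values below are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class ModelFactsheet:
    """A 'nutrition label' for an AI model: what it was trained on,
    what it is intended for, and how it performed at evaluation time."""
    model_name: str
    intended_use: str
    training_data_sources: list
    evaluation_metrics: dict
    known_risks: list = field(default_factory=list)

    def render(self) -> str:
        # Produce a plain-text label a reviewer or citizen could read.
        lines = [
            f"Model: {self.model_name}",
            f"Intended use: {self.intended_use}",
            "Training data: " + "; ".join(self.training_data_sources),
            "Evaluation: " + ", ".join(
                f"{name}={value}" for name, value in self.evaluation_metrics.items()
            ),
        ]
        if self.known_risks:
            lines.append("Known risks: " + "; ".join(self.known_risks))
        return "\n".join(lines)

# Hypothetical example: a citizen-service assistant for benefit programs.
sheet = ModelFactsheet(
    model_name="benefits-faq-assistant",
    intended_use="Answering citizen questions about public benefit programs",
    training_data_sources=["agency FAQ corpus (2023 snapshot)"],
    evaluation_metrics={"answer_accuracy": 0.91},
    known_risks=["may surface outdated eligibility rules"],
)
print(sheet.render())
```

Recording this metadata alongside each model is one simple way an agency could document where a model’s outputs come from before putting it in front of the public.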
As the public sector continues to embrace AI and automation to solve challenges and improve efficiency, it is important to maintain trust and transparency in all AI solutions. Governments must effectively understand and manage the entire AI lifecycle, and leaders must be able to easily explain the data used to train and fine-tune models, as well as how the models reached their results. Proactively adopting responsible AI practices is an opportunity for all of us to improve, and for governments to drive transparency while leveraging AI for good.