AI governance is evolving rapidly. Here’s how government agencies should prepare.
The global AI governance environment is complex and rapidly evolving. While key topics and concerns are emerging, government agencies must stay ahead of the curve by evaluating agency-specific priorities and processes. Ensuring compliance with formal policies through audit tools and other measures is only the final step. The foundation for effective governance is human-centric and includes securing funding authority, identifying accountable leaders, developing AI capabilities and centers of excellence across the institution, and integrating insights from academia, nonprofits, and private industry.
Global governance environment
As of this writing, the OECD Policy Observatory lists 668 national AI governance initiatives across 69 countries, regions, and the EU. These include national strategies, agendas, and plans; AI coordination or monitoring bodies; public consultations of stakeholders or experts; and initiatives to leverage AI in the public sector. The OECD places legally enforceable AI regulations and standards in a separate category from the initiatives above, listing an additional 337 of them.
The term “regulation” can be slippery. In the context of AI, it can refer to safety and ethics guardrails for AI tools and systems, to policies governing data access and model use, or to government-mandated regulation itself. National and international guidance therefore addresses these overlapping and intersecting definitions in different ways.
Common challenges, common themes
Broadly speaking, government agencies strive for governance that supports and balances societal concerns about economic prosperity, national security, and political dynamics. Private companies prioritize economic prosperity, focusing on the efficiency and productivity that drive business success and shareholder value. However, there is growing concern that corporate governance does not account for the best interests of society as a whole and treats important safeguards as an afterthought.
Non-governmental organizations also issue guidance that may be useful to public sector organizations. This year, the World Economic Forum’s AI Governance Alliance released the Presidio AI Framework (PDF), which “…offers a structured approach to the secure development, deployment, and use of generative AI. In doing so, the framework highlights gaps and opportunities in addressing safety issues from the perspective of four key actors: AI model creators, AI model adapters, AI model users, and AI application users.”
An academic and scientific perspective is also essential. In their overview of catastrophic AI risks, the authors identify several mitigations that can be addressed through governance and regulation (in addition to cybersecurity), and they single out international coordination and safety regulation as important for preventing risks associated with “AI competition.”
Several common regulatory themes are emerging across industries and sectors. For example, it is increasingly recommended that end users be told when they are interacting with AI and how it is being used. Leaders must also ensure trustworthy performance and resistance to attack, along with a viable commitment to social responsibility. This includes prioritizing fairness and absence of bias in training data and outputs, minimizing environmental impact, and increasing accountability by designating responsible individuals and providing organization-wide training.
Policy alone is not enough
Whether a governance policy relies on soft law or formal enforcement, and no matter how comprehensively, precisely, or wisely it is written, it is only a set of principles. What matters is how the organization puts those principles into practice. For example, New York City released its own AI action plan in October 2023 and formalized its AI principles in March 2024. Those principles are consistent with the themes above, including that AI tools “must be tested before deployment.” Yet the city’s AI-powered chatbot, designed to answer questions about starting and operating a business, was found to provide answers that encouraged users to break the law. Where did implementation break down?
Governance requires a human-centered, accountable and participatory approach. Let’s look at three key actions agencies should take.
1. Appoint accountable leaders and fund their missions
Trust cannot exist without accountability. To operationalize a governance framework, government agencies need accountable leaders with the funding to get the work done. To name just one knowledge gap: several senior technology leaders we interviewed had no understanding of how data can be biased. Data is a product of human experience, and it is prone to calcifying worldviews and inequalities; AI can act as a mirror that reflects our biases back at us. It is essential that we recognize this and appoint accountable leaders who are financially empowered to ensure that AI operates ethically and in line with the values of the communities it serves.
2. Provide applied governance training
We observe many organizations holding AI “innovation days” and hackathons aimed at improving operational efficiency (e.g., cost savings, citizen or employee engagement, and other KPIs). We recommend expanding the scope of these hackathons to address AI governance issues through the following steps:
- Step 1: Three months before the pilot is announced, have a candidate governance leader deliver a keynote on AI ethics to hackathon participants.
- Step 2: Have the government agencies that set policy act as judges for the event. Provide criteria for judging each pilot that include AI governance artifacts (documented outputs) such as fact sheets, audit reports, effect hierarchy analyses (intended and unintended, primary and secondary impacts), and the functional and non-functional requirements of the model in operation. (A sketch of one such artifact follows this list.)
- Step 3: Over the six to eight weeks leading up to the presentation date, provide applied training on developing these artifacts through workshops focused on each team’s specific use case. Strengthen development teams by inviting cross-disciplinary colleagues into the workshops to assess ethics and model risk.
- Step 4: On the day of the event, have each team present its work holistically and demonstrate how it assessed and mitigated the various risks associated with its use case. Judges with domain expertise and with DEI, regulatory, and cybersecurity backgrounds should question and evaluate each team’s work.
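To make the idea of a governance artifact concrete, here is a minimal sketch of how a team might record an effect hierarchy analysis as structured data for judges to review. It is written in Python purely for illustration; the class names, fields, and example entries are hypothetical assumptions, not a standard schema.

```python
# Minimal sketch of one governance artifact: an effect hierarchy analysis.
# All names and categories here are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum

class Order(Enum):
    PRIMARY = "primary"      # direct effect of the system itself
    SECONDARY = "secondary"  # knock-on effect on other people or processes

@dataclass
class Effect:
    description: str
    intended: bool        # was this effect a design goal?
    order: Order
    affected_group: str
    mitigation: str = ""  # expected for unintended effects before sign-off

@dataclass
class EffectHierarchyAnalysis:
    use_case: str
    effects: list[Effect] = field(default_factory=list)

    def unmitigated(self) -> list[Effect]:
        """Unintended effects that still lack a documented mitigation."""
        return [e for e in self.effects if not e.intended and not e.mitigation]

# Example: a hypothetical permit-application chatbot pilot
analysis = EffectHierarchyAnalysis(
    use_case="Chatbot answering questions about business permits",
    effects=[
        Effect("Faster answers for applicants", True, Order.PRIMARY,
               "small-business owners"),
        Effect("Incorrect answers that encourage rule-breaking", False,
               Order.PRIMARY, "applicants"),  # no mitigation documented yet
        Effect("Reduced call volume to the help desk", True, Order.SECONDARY,
               "agency staff"),
    ],
)
print(analysis.unmitigated())  # judges can require this list to be empty
```

Encoding the artifact this way lets judges check mechanically that every unintended effect has a documented mitigation before a pilot proceeds.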
These timelines are based on our experience providing applied training to practitioners on very specific use cases. They give prospective leaders the opportunity to do real work under the guidance of a coach, while placing team members in the role of discerning governance judges.
But hackathons alone are not enough; you can’t learn everything in three months. Institutions must invest in building a culture of AI literacy that promotes continuous learning, including discarding old assumptions when necessary.
3. Assess inventory beyond algorithmic impact assessments
Many organizations developing large AI models use algorithmic impact assessment forms as their primary mechanism for collecting critical metadata about their inventory and for assessing and mitigating risk before deployment. These forms survey the AI model’s owner or procurer about the model’s purpose, its training data and approach, the responsible parties, and concerns about disparate impact.
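As an illustration, here is a minimal sketch of the kind of metadata such a form might capture, with a simple completeness check. The field names and the `incomplete_fields` helper are hypothetical, not any regulation’s actual schema.

```python
# Minimal sketch of the metadata an algorithmic impact assessment form
# might capture. Field names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ImpactAssessment:
    model_name: str
    purpose: str
    training_data_summary: str
    training_approach: str               # e.g., "fine-tuned", "procured", "rules-based"
    accountable_owner: str               # a named person, not just a team
    procured_from: Optional[str] = None  # third-party supplier, if any
    disparate_impact_concerns: list[str] = field(default_factory=list)

    def incomplete_fields(self) -> list[str]:
        """Fields left blank: one signal that the form was filled out hastily."""
        return [name for name, value in vars(self).items()
                if value in ("", None, [])]

# A hastily completed form: no owner named, no disparate-impact concerns listed.
form = ImpactAssessment(
    model_name="permit-chatbot-v1",
    purpose="Answer questions about business permits",
    training_data_summary="Vendor-supplied; details unknown",
    training_approach="procured",
    accountable_owner="",
)
print(form.incomplete_fields())
# ['accountable_owner', 'procured_from', 'disparate_impact_concerns']
```

A check like this can flag hastily completed forms, but, as the concerns below suggest, tooling alone cannot fix the underlying incentives.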
There are many concerns about these forms being used alone, without rigorous training, communication, and cultural consideration. These include:
- Incentives: Are individuals incentivized or disincentivized to complete these forms thoughtfully? Many are disincentivized because they have quotas to meet.
- Responsibility for risk: These forms may imply that responsibility for risk is absolved because the model owner used a particular technology or cloud host, or procured the model from a third party.
- AI-related definitions: Model owners may not realize that what they are procuring or deploying actually meets the definition of AI or intelligent automation outlined in the regulations.
- Ignorance of disparate impact: It could be argued that placing the responsibility for completing and submitting an algorithmic assessment form on a single person omits an accurate assessment of disparate impact by design.
We have seen form responses from AI practitioners across a variety of geographies and education levels, including people who said they had read the published policy and understood its principles. Those responses included “How can my AI model be unfair if I don’t collect PII?” and “I have the best intentions, so there is no risk of disparate impact.” This points to an urgent need for applied training and an organizational culture that continually measures model behavior against clearly defined ethical guidelines.
Creating a culture of responsibility and collaboration
A participatory and inclusive culture is essential as organizations grapple with governing a technology with such far-reaching impact. As previously discussed, diversity is a mathematical factor, not merely a political one. Multidisciplinary centers of excellence are essential to ensure that all employees are trained to understand risks and disparate impacts and to become responsible users of AI. Organizations must make governance an integral part of collaborative innovation efforts, emphasizing that accountability lies with everyone, not just model owners. They must bring a socio-technical perspective to governance issues, identify the leaders who are truly accountable, and welcome new approaches to mitigating AI risk, whatever their source: government, non-government, or academia.
Learn how IBM Consulting can help your organization operationalize responsible AI governance.
For more information on this topic, read this summary of a recent IBM Government Business Center roundtable with government leaders and stakeholders on how responsible use of artificial intelligence can benefit the public by improving agency service delivery.