Starting point: Three IBM leaders provide guidance to the newly appointed Chief AI Officer.

According to LinkedIn, the number of Chief Artificial Intelligence Officers (CAIOs) has nearly tripled in the past five years. Companies across industries are realizing the need to integrate artificial intelligence (AI) into their core strategies to keep up. These AI leaders are responsible for developing a blueprint for AI adoption and oversight in both business and the federal government.

In response to the Biden administration’s recent executive orders and the rapid increase in AI adoption across sectors, the Office of Management and Budget (OMB) issued a memo on how federal agencies can seize the opportunities of AI while managing the risks.

Many federal agencies are appointing CAIOs to oversee the use of AI within their domains, promote responsible AI innovation, and consider the impact on citizens to address risks associated with AI, including generative AI. But how do these CAIOs balance innovation with regulatory action? How will they build trust?

Three IBM leaders provide insight into the significant opportunities and challenges facing new CAIOs in their first 90 days.

1. “Consider safety, inclusion, trust, and governance from the beginning.”

—Kush Varshney, IBM Research Fellow

The first 90 days as Chief AI Officer will be intense and fast-paced, but you must still slow down and not take shortcuts. Consider safety, inclusivity, trustworthiness, and governance from the beginning rather than leaving them for later. But don’t allow the caution and critical perspective of your inner social change agent to extinguish the optimism of your inner technologist. Remember, just because AI is here now doesn’t mean institutions are exempt from their existing responsibilities to people. When articulating problems, understanding data, and evaluating solutions, consider the most vulnerable among us.

Don’t be afraid to reimagine fairness, moving from simply distributing limited resources equitably to finding ways to care for those most in need. Don’t be afraid to reframe responsibility, moving from mere compliance to actively managing the technology. And don’t be afraid to reframe transparency, moving from documenting choices after the fact to proactively seeking public input.

Like urban planning, AI is infrastructure. The choices you make now can impact future generations. Follow the seventh-generation principle, but do not give in to long-term existential-risk arguments at the expense of clear and present harms. Keep an eye on the harms we have faced for years from traditional machine learning models, and on the new and amplified harms we are seeing from pre-trained foundation models. Choose smaller models where costs and operations can be controlled. Test and innovate using a portfolio of projects. Reuse and strengthen solutions for the common patterns that emerge, then deliver at scale through a multi-model platform approach, as sketched below.
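To make the “smaller models where they suffice, multi-model platform” idea concrete, here is a minimal, hypothetical Python sketch of a routing policy that prefers the cheapest model in a portfolio able to handle a given task. The model names, cost figures, and complexity scores are illustrative assumptions, not an IBM product or recommendation.

```python
# Hypothetical sketch: route tasks across a portfolio of models, preferring the
# smallest model that can handle the job. Names, costs, and complexity scores
# are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class ModelOption:
    name: str                  # e.g. a small domain-tuned model vs. a large general model
    cost_per_1k_tokens: float  # assumed relative operating cost
    max_complexity: int        # 1 = simple extraction, 3 = open-ended generation


PORTFOLIO = [
    ModelOption("small-domain-model", cost_per_1k_tokens=0.1, max_complexity=1),
    ModelOption("mid-size-model", cost_per_1k_tokens=0.5, max_complexity=2),
    ModelOption("large-general-model", cost_per_1k_tokens=2.0, max_complexity=3),
]


def route_task(task_complexity: int) -> ModelOption:
    """Pick the cheapest model in the portfolio that can handle the task,
    so smaller, controllable models are used whenever they suffice."""
    candidates = [m for m in PORTFOLIO if m.max_complexity >= task_complexity]
    return min(candidates, key=lambda m: m.cost_per_1k_tokens)


if __name__ == "__main__":
    for complexity in (1, 2, 3):
        chosen = route_task(complexity)
        print(f"complexity {complexity} -> {chosen.name}")
```

The design point is simply that cost and operational control become routing criteria, so the large model is the exception rather than the default.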

2. “Create trustworthy AI development and deployment.”

—Christina Montgomery, IBM Vice President and Chief Privacy and Trust Officer

To drive efficiency and innovation and build trust, all CAIOs should start by implementing an AI governance program that helps address the ethical, social, and technical issues central to trustworthy AI development and deployment.

Start by conducting an organizational maturity assessment against your institution’s benchmarks during the first 90 days. Use a framework and assessment tools that clearly indicate the strengths and weaknesses affecting your ability to implement AI tools and address the associated risks. This process can also help identify the problems and opportunities that AI solutions can address.

In addition to technical requirements, institution-wide ethics and values regarding the creation and use of AI must be documented and articulated, which will inform decisions about risk. These guidelines should address issues such as data privacy, bias, transparency, accountability, and safety.

IBM has developed the Trust and Transparency Principles and the “Ethics by Design” playbook to help you and your team operationalize these principles. As part of this process, establish accountability and oversight mechanisms to ensure that AI systems are used responsibly and ethically: clear structures that assign responsibility for compliance with the ethical guidelines, together with monitoring and audit processes.
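As one illustration of what such an oversight mechanism can look like in practice, the hypothetical Python sketch below wraps a model call so that every prediction is written to an audit log for later monitoring and review. The log location, field names, and the predict_claim_priority() example are assumptions for illustration only, not an IBM or OMB standard.

```python
# Hypothetical sketch of an audit mechanism: wrap model calls so each
# prediction is recorded with its use case, owner, inputs, and output.
import datetime
import functools
import json

AUDIT_LOG = "ai_audit_log.jsonl"  # assumed path; use durable, access-controlled storage in practice


def audited(use_case: str, owner: str):
    """Decorator that appends one JSON line per model call to the audit log."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            record = {
                "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
                "use_case": use_case,
                "owner": owner,
                "function": fn.__name__,
                "inputs": {"args": args, "kwargs": kwargs},
                "output": result,
            }
            with open(AUDIT_LOG, "a", encoding="utf-8") as f:
                f.write(json.dumps(record, default=str) + "\n")
            return result
        return wrapper
    return decorator


@audited(use_case="benefits-claim-triage", owner="office-of-the-caio")
def predict_claim_priority(claim_text: str) -> str:
    # Placeholder for a real model call; returns a fixed label for illustration.
    return "routine"


if __name__ == "__main__":
    print(predict_claim_priority("Veteran requests review of disability rating."))
```

The point is that accountability is built into the call path itself, so audits do not depend on teams remembering to keep records after the fact.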

Next, you need to start adapting your agency’s existing governance structures to support AI. Quality AI requires quality data. Many of your existing programs and practices, including third-party risk management, procurement, enterprise architecture, legal, privacy, and information security, are already relevant and can be extended to cover AI, driving efficiencies and leveraging the full capabilities of your agency team.

The December 1, 2024 deadline to put minimum risk management practices in place for safety-impacting and rights-impacting AI, or to stop using that AI until compliance is achieved, may arrive sooner than you think. In your first 90 days, leverage automated tools to streamline these processes and implement the strategies you need to create responsible AI solutions, with the help of a trusted partner like IBM.
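As a rough illustration of tracking that deadline, the hypothetical sketch below walks an AI use-case inventory and flags any safety-impacting or rights-impacting use case that has not yet completed the minimum risk management practices. The fields and example use cases are assumptions for illustration, not the OMB memo’s prescribed format.

```python
# Hypothetical sketch: check an AI use-case inventory against a rule of
# "complete the minimum practices or pause the system." Fields and examples
# are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AIUseCase:
    name: str
    safety_impacting: bool
    rights_impacting: bool
    min_practices_complete: bool  # e.g. impact assessment, testing, ongoing monitoring


def compliance_action(uc: AIUseCase) -> str:
    """Return the action implied by the deadline for this use case."""
    if not (uc.safety_impacting or uc.rights_impacting):
        return "monitor"
    return "continue" if uc.min_practices_complete else "pause until compliant"


if __name__ == "__main__":
    inventory = [
        AIUseCase("chatbot for public FAQs", False, False, False),
        AIUseCase("claims triage model", False, True, True),
        AIUseCase("benefits eligibility screening model", False, True, False),
    ]
    for uc in inventory:
        print(f"{uc.name}: {compliance_action(uc)}")
```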

3. “Establish an enterprise-wide approach.”

—Terry Halvorsen, IBM Vice President, Federal Client Development

IBM has been working with U.S. federal agencies to support AI development for more than a decade. This technology has enabled significant advancements in operational efficiency, productivity, and decision-making for many federal agencies. For example, AI has helped the Internal Revenue Service (IRS) speed up the processing of paper tax returns (delivering tax refunds to citizens) and helped the Department of Veterans Affairs (VA) reduce the time it takes to process veterans’ claims. Naval Fleet Command can better plan and balance its food supply while reducing associated supply chain risks.

Additionally, IBM has long recognized the potential risks of AI adoption and has advocated for strong governance and for AI that is transparent, explainable, robust, fair, and secure. To mitigate risks, simplify implementation, and capitalize on opportunities, every newly appointed CAIO must establish an enterprise-wide approach to data and a governance framework for AI adoption. Data accessibility, data volume, and data complexity all need to be understood and addressed. ‘Enterprise-wide’ means that AI development and deployment, and data governance, break out of the agency’s existing organizational silos. Engage stakeholders across the agency, not just industry partners. Measure results and learn from your agency’s efforts and those of your colleagues across government.

And finally, the old adage ‘begin with the end in mind’ still rings true today. IBM encourages CAIOs to follow a use-case-driven approach to AI. That means identifying the target outcomes and experiences you want to create, then selecting the specific AI technologies (generative AI, traditional AI, and so on) best suited to deliver them.

CAIOs lead by example

Public leadership can set the tone for AI adoption across all sectors. The creation of the CAIO position is critical to the future of AI, allowing our government to model a responsible approach to AI adoption across business, government, and industry.

IBM has developed tools and strategies to help organizations adopt AI effectively and responsibly across a variety of environments. We stand ready to support these new CAIOs as they begin to build ethical and responsible AI implementations within their institutions.

Wondering what to prioritize in your AI journey?

Request an AI strategy briefing with IBM
