
Balancing AI: Do Good and Avoid Harm

Growing up, my father always told me to “do good.” As a child, I thought it was poor grammar and would correct him, but he insisted on “do good.” Even my kids make fun of me when they hear his “do good” advice, and I will admit that I have given him a pass on the grammar.

For responsible artificial intelligence (AI), preventing harm must be a central priority for organizations. Many organizations aim to use AI for “good,” but AI often requires clear guardrails before everyone can agree on what “good” means.

Read the “Presidio AI Framework” article to learn how to use guardrails to address generative AI risk across the extended AI lifecycle.

As generative AI continues to become mainstream, organizations are excited about its potential to transform processes, reduce costs, and increase business value. Business leaders want to redesign their business strategies to better serve customers, patients, employees, partners, or citizens and improve the overall experience. Generative AI is opening doors and creating new opportunities and risks for organizations around the world, and human resources (HR) leadership plays a key role in managing these challenges.

Adapting to the implications of increased AI adoption may involve complying with complex frameworks and regulatory requirements such as the NIST AI Risk Management Framework, the EU AI Act, New York City Local Law 144, US EEOC guidance, and White House AI guidance. These have a direct impact on society and the profession, as well as on HR and organizational policies, technology, and collective bargaining labor agreements. Adopting responsible AI requires a multi-stakeholder strategy informed by leading international resources, including NIST, the OECD, the Responsible Artificial Intelligence Institute, the Data & Trust Alliance, and IEEE.

This is not just an IT responsibility: HR plays a key role

HR leaders are now turning to AI and other technologies to advise companies on the skills needed for today’s work and the skills that will be needed tomorrow. According to the World Economic Forum (WEF), employers estimate that 44% of workers’ skills will be disrupted in the next five years. HR professionals are increasingly exploring AI’s potential to improve productivity by empowering employees and helping them focus on higher-level tasks. As AI capabilities expand, there are ethical concerns and questions every business leader must consider to ensure that AI use does not come at the expense of employees, partners, or customers.

Learn about the principles of trust and transparency IBM recommends for organizations to responsibly integrate AI into their operations.

Worker training and knowledge management are now tightly coordinated as a multi-stakeholder strategy with IT, legal, compliance, and business operations, rather than a box checked once a year. HR leaders must therefore be intrinsically involved in setting policies and developing programs to grow employees’ AI acumen, identifying where to apply AI capabilities, establishing responsible AI governance strategies, and deploying AI and automation tools thoughtfully and respectfully, earning employees’ trust through the introduction of reliable and transparent AI.

Challenges and solutions for introducing AI ethics in organizations

Although AI adoption and use cases continue to expand, organizations may not be fully prepared for the many considerations and consequences of adopting AI capabilities into their processes and systems. According to an IBM Institute for Business Value study, 79% of executives surveyed emphasized the importance of AI ethics in their enterprise AI approach, but less than 25% implemented common principles of AI ethics.

This discrepancy exists because policy alone cannot keep pace with the growing diffusion and use of digital tools. A growing number of workers use smart devices and apps such as ChatGPT or other black-box models without proper authorization, an ongoing problem that is rarely accompanied by the change management needed to inform workers of the associated risks.

For example, employees may paste sensitive customer data into these tools when drafting emails to customers, and managers may expose private employee data when using them to write performance reviews.

To help reduce these risks, it may be useful to designate a responsible-AI focal point or advocate within each department, business unit, and functional level. This is an opportunity for HR to lead and advocate for efforts that prevent potential ethical issues and operational risks.

Ultimately, it is essential to establish and communicate a responsible AI strategy to all employees, with common values and principles that align with the company’s broader values and business strategy. This strategy should advocate for employees and identify opportunities for the organization to embrace AI and innovation that drives business goals. It should also include training that helps employees prevent harmful AI impacts, address misinformation and bias, and promote responsible AI within the organization and in society.

3 Considerations for Responsible AI Adoption

Here are three key considerations business and HR leaders should keep in mind as they develop a responsible AI strategy:

Put people at the center of your strategy

In other words, prioritize your workforce when planning your advanced technology strategy. This means understanding how AI will work alongside employees, communicating concretely how AI can help them perform their roles, and redefining the way they work. Without training, employees may become overly concerned that AI is being deployed to replace them or eliminate their roles. Communicate honestly and directly with your employees about how these models are built. HR leaders must address not only potential job changes, but also the new job categories and roles created by AI and other technologies.

Activate governance that accounts for both the technology being adopted and the company adopting it

AI is not a monolith. Organizations can deploy it in a variety of ways, so they need to clearly define what responsible AI means to them, how they plan to use it, and how they will not use it. Principles such as transparency, trust, equity, fairness, robustness, and the use of diverse teams must be considered and designed into each AI use case, whether or not it involves generative AI, following guidelines such as those from the OECD or the Responsible Artificial Intelligence Institute (RAII). Additionally, for each model, conduct routine reviews of model drift and privacy measures, as well as specific diversity, equity, and inclusion metrics to mitigate bias.
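
As a rough illustration of what such a routine review could look like in practice, the sketch below computes two commonly used signals: a population stability index as a simple drift check and a disparate impact ratio as one basic fairness metric. This is a minimal sketch using synthetic data, not IBM’s tooling or a metric set mandated by any framework named above; the thresholds, group labels, and data are hypothetical assumptions.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Rough drift signal: compare the score distribution seen at training
    time ('expected') with the distribution seen in production ('actual')."""
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, cuts)[0] / len(expected)
    a_pct = np.histogram(actual, cuts)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # avoid log(0) in sparse bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

def disparate_impact_ratio(predictions, groups, protected="B", reference="A"):
    """Selection rate of the protected group divided by the reference group's
    rate; values well below 1.0 suggest the model favors the reference group."""
    return float(predictions[groups == protected].mean()
                 / predictions[groups == reference].mean())

# Example review run with synthetic data (hypothetical hiring-style model).
rng = np.random.default_rng(0)
train_scores = rng.normal(0.50, 0.10, 5000)   # scores at training time
prod_scores = rng.normal(0.55, 0.12, 5000)    # drifted production scores
preds = rng.integers(0, 2, 1000)              # 1 = advance candidate, 0 = do not
groups = rng.choice(["A", "B"], 1000)         # self-reported demographic group

psi = population_stability_index(train_scores, prod_scores)
di = disparate_impact_ratio(preds, groups)
print(f"PSI: {psi:.3f} (a common rule of thumb flags values above 0.2)")
print(f"Disparate impact ratio: {di:.2f} (the four-fifths rule flags values below 0.8)")
```

In a real review cadence, signals like these would be logged per model and per release, with thresholds and escalation paths agreed on by the multi-stakeholder governance group described above.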

Identify and align the right skills and tools for the job

The reality is that some employees are already experimenting with generative AI tools to help them answer questions, draft emails, and perform other routine tasks. Therefore, organizations must immediately communicate plans to use these tools, set expectations for employees who use them, and ensure that use of these tools is consistent with the organization’s values and ethics. Organizations should also provide skills development opportunities for employees to improve their AI knowledge and understand potential career paths.

For detailed guidance on how to responsibly adopt AI in your organization, download the white paper, “Creating Value from Generative AI.”

Responsible implementation and integration of AI into organizations is essential for successful AI adoption. Together with our customers and partners, IBM has made responsible AI central to our approach to AI. In 2018, IBM established the AI Ethics Board as a central, multidisciplinary body to support an ethical, responsible, and trustworthy AI culture. It is comprised of senior leaders from a variety of functions, including Research, Business Units, Human Resources, Diversity and Inclusion, Legal, Government and Regulatory Affairs, Procurement, and Communications. The board directs and implements AI-related initiatives and decisions. We take the benefits and challenges of AI seriously and hold ourselves accountable for everything we do.

I will grant my father this one broken grammar rule: AI can “do good” if it is managed properly, with the involvement of many humans, guardrails, oversight, governance, and AI ethics frameworks.

Watch our webinar on how to prepare your business for responsible AI adoption. Learn how IBM helps clients on their talent transformation journey.
