Business’s Best Choice for the Future: Generative AI Security

IBM and AWS study: Fewer than 25% of generative AI projects are currently being secured.

The corporate world has long operated on the notion that trust is the currency of good business. But as AI transforms and redefines how businesses operate and how customers interact with them, trust in the technology must be built.

Advances in AI have freed up human capital to focus on high-value outcomes. These advancements will have a transformative impact on business growth, but user and customer experiences depend on an organization’s commitment to building secure, responsible, and trustworthy technology solutions.

Businesses must determine whether they can trust the generative AI that interfaces with their users, and security is a fundamental component of trust. Therefore, one of the biggest bets enterprises face is securing their AI deployments.

The "innovate now, secure later" disconnect

Today, the IBM® Institute for Business Value published Generative AI Security: Today's Critical Research, co-authored by IBM and AWS, presenting new data, practices, and recommendations for securing generative AI deployments. According to the IBM research, 82% of C-suite respondents say secure and trustworthy AI is essential to business success. While this sounds promising, 69% of the leaders surveyed indicated that when it comes to generative AI, innovation takes precedence over security.

Prioritizing innovation over security may look like a choice, but it is really a test, and the tension is clear: organizations recognize that the stakes of generative AI are higher than ever, yet they are not applying the lessons learned from previous technology disruptions. As with the moves to hybrid cloud, agile software development, and zero trust, generative AI security risks becoming an afterthought. More than 50% of respondents are concerned that unpredictable risks will affect their generative AI initiatives and fear they will increase the likelihood of business disruption. Yet they report that only 24% of current generative AI projects are being secured. Why such a disconnect?

Indecision about security may be both a symptom and a consequence of a broader generative AI knowledge gap. Nearly half of respondents (47%) said they are uncertain about where and how much to invest in generative AI. Even as teams pilot new capabilities, leaders are still working out which generative AI use cases fit best and how to scale them to production environments.

Generative AI security starts with governance

Not knowing where to start can also hold back security efforts. That's why IBM and AWS have joined forces to develop actionable guidance and practical recommendations for organizations seeking to secure their AI.

To build trust and security into generative AI, organizations must start with the basics: governance. In fact, 81% of respondents said generative AI requires a fundamentally new security governance model. By beginning with governance, risk, and compliance (GRC), leaders can lay the foundation of a cybersecurity strategy that protects the AI architecture while aligning with business goals and brand values.

To secure any process, you first need to understand how it is supposed to work and what expected behavior looks like, so that deviations can be identified. AI that strays from its intended operational design can introduce new risks with unforeseen business impacts. Identifying and understanding these potential risks helps organizations set their own risk thresholds based on their unique compliance and regulatory requirements.

Once governance guardrails are in place, organizations can more effectively formulate a strategy for securing the AI pipeline: the data, the models and their use, and the underlying infrastructure on which they build and embed AI innovation. The shared responsibility model for security may shift depending on how the organization consumes generative AI. As organizations build out their own AI operations, a variety of tools, controls, and processes are available to help mitigate the risk of business impact.

Organizations must also recognize that while hallucinations, ethics, and bias are often the first things that come to mind when thinking about trustworthy AI, their AI pipelines also face a threat landscape that puts trust itself at risk: existing threats take on new significance, attackers use offensive AI capabilities as a new attack vector, and emerging threats seek to compromise the AI assets and services we increasingly rely on.

The trust-security equation

Security can help build trust and confidence in generative AI use cases. To achieve this synergy, organizations must take a "village" approach: the conversation needs to extend beyond information security and IT stakeholders to strategy, product development, risk, supply chain, and customer engagement.

Because these technologies are innovative and disruptive, managing an organization’s AI and generative AI assets requires collaboration across security, technology, and business domains.

Technology partners can play an important role here. Drawing on a partner's breadth and depth of expertise across the threat lifecycle and the security ecosystem can be a valuable asset. In fact, the IBM study found that more than 90% of organizations surveyed enable their generative AI security through third-party products or technology partners. When choosing a technology partner for their generative AI security needs, surveyed organizations reported:

  • 76% are looking for a partner who can help them build a compelling cost case with a solid ROI.
  • 58% seek guidance on overall strategy and roadmap.
  • 76% are looking for partners who can facilitate training, knowledge sharing, and knowledge transfer.
  • 75% choose a partner who can guide them through the evolving legal and compliance environment.

The study found that while organizations recognize the importance of security in the AI revolution, they are still working out how best to approach it. Building partnerships that can guide, advise, and technically support these efforts is a critical next step toward protected and trustworthy generative AI. Along with key insights into leadership perceptions and priorities, IBM and AWS have included an action guide with practical recommendations for taking your generative AI security strategy to the next level.

Learn more about joint IBM-AWS research and how organizations can secure their AI pipelines.
