How to Help Manage the Risks of Generative AI in the Enterprise

What are the risks enterprises face when adopting generative AI?

Many organizations are racing to deploy generative artificial intelligence (AI) products, as they look for ways to leverage the hot technology.

While generative AI is revolutionizing how people create, interact with, and consume digital content—and the advent of large language models (LLMs) such as Generative Pre-Trained Transformer (GPT) has increased the capabilities of generative AI—the technology also presents security risks for organizations and users.

Security teams need to adapt to how their enterprises plan to use generative AI, or they will find themselves unprepared to defend it, according to a July 2023 report, Securing Generative AI, from Forrester Research.

Attackers have already begun exploiting the rising popularity of generative AI-based models to their advantage, according to a new report by Rezilion. AI introduces new threats that didn’t exist before, which require attention and awareness, the report says.

Here are some practices for enhancing the security of generative AI.

Build Robust Security Practices

While organizations might be tempted to rush into integrating generative AI and LLMs into their workflows and applications, doing so without assessing the security concerns and creating strong security practices invites trouble.

Security teams need to conduct comprehensive risk assessments and enforce adherence to strong security practices throughout the software development life cycle (SDLC), according to the Rezilion report. By focusing on security risks, they can make more informed decisions about how to adopt generative AI while also upholding the highest standards of scrutiny and protection.

Deploy the Right Tools and Frameworks

Organizations need to have the right products deployed to defend against AI-related threats. Tools such as the OpenSSF Scorecard, for example, can help teams make more informed decisions by taking into account the security aspects and not just functional considerations.

When building generative AI-based systems it’s a good idea to adopt a secure-by-design approach, according to the Rezilion report. Companies should look for platforms that automatically detect, prioritize and eliminate software supply chain risk. These tools enable security teams to view all of an organization’s software, across development, production, in the cloud, on-premises and in different operating environments.

Educate Users About the Risks of Generative AI

With many enterprises adopting generative AI, it is imperative that security teams get up to speed quickly on the technology, the Forrester report says.

Security practitioners need to be trained to understand prompt engineering and prompt injection attacks, the report says.

The Rezilion report notes that organizations need to educate users and stakeholders about the constraints and limitations of LLM-generated content, specifically concerning reliability and accuracy. This allows users to interact with LLMs with a more discerning mindset, it says.

It’s also vital to promote oversight to ensure accuracy and impartiality in LLM-generated content, and to establish user verification processes through alternative sources, the Rezilion report says. This can foster a comprehensive decision-making process.

To learn more about managing the risks of AI, download our white paper today.

 

Reduce your patching efforts by
85% or more in less than 10 minutes