How to trust a GenAI agent: four key requirements

This commentary is from Okta, a sponsor of Fortune Brainstorm AI.

Shiven Ramji is president, customer identity cloud, at Okta.

[Image caption: IT professionals need to ensure AI is used safely. Getty Images]

When ChatGPT debuted in late 2022, it took the world by storm. Users were stunned by the app’s ability to give helpful, human-like responses to their input. Overnight, generative AI (GenAI) went mainstream.

Two years later, the era of GenAI agents has arrived. These supportive sidekicks are increasingly capable of performing tasks and making decisions autonomously. Nearly every week, a new agent seems to hit the market, with recent debuts from Microsoft, Salesforce, ServiceNow, and Priceline. Gartner says these agents should see rapid adoption, likely making up a third of all GenAI interactions by 2028.

All of this raises the question: How can we ensure these agents are secure?

The benefits of AI-powered assistants are vast — they are purchasing stocks on our behalf, waiting in digital queues to buy concert tickets for us, qualifying sales leads for our businesses, and more. But actions like these require sensitive, valuable, and possibly even confidential information from businesses and consumers. To take advantage of such powerful technology while keeping our data and identities secure, we need to ask:

  • How can we know and control what data is being used by the large language models (LLMs) that are powering these agents?
  • How do we equip organizations and developers with the tools to work confidently alongside GenAI agents?
  • How can we maintain privacy while using GenAI agents that require personal information about us?
  • How can we be sure that the GenAI agent we’re using is operated by a reputable vendor?
  • When we interact with a GenAI agent that’s connected to other agents and apps, how can we trust that those connections are secure?
  • How can we require additional verification before an AI agent takes action on our behalf?

These are just some of the questions companies must address as they integrate GenAI agents into their products. They’ll need to put specific controls in place and evolve their identity and access management (IAM) to prevent misuse and mitigate potential threats.

The unique risks of GenAI

You might be wondering: How are AI-powered agents different from the dozens of other apps I use daily?

First, they don’t operate in silos. Those helpful actions — buying stocks, booking concert tickets, etc. — are only possible when GenAI agents interact with third-party sites and services on your behalf.

Second, GenAI apps are built differently from traditional apps. They use elements like LLMs and vector databases instead of classic app architecture, and these elements come with their own potential risks. In fact, the Open Worldwide Application Security Project (OWASP) recently compiled a list of the top 10 vulnerabilities for LLM-based applications, including sensitive information disclosure and excessive agency.

Vulnerabilities like these can be addressed with a thoughtful, identity-based approach.

Four key considerations for secure AI integration

For companies looking to integrate GenAI agents into their products and services safely, here are four requirements to keep in mind:

  1. User authentication. Before an agent can display a user’s chat history or customize its replies based on their age, it needs to know who that user is, which requires secure authentication.
  2. Secure APIs. AI agents need to interact with other applications via APIs to take actions on a user’s behalf. As GenAI apps integrate with more products, calling APIs on behalf of end users, and doing so securely, will become critical (a token-based pattern for this is sketched after this list).
  3. Async authentication. To complete complex tasks or wait for certain conditions to be met (like booking airfare only when it drops below $200), AI agents need extra time. That means running in the background for minutes, hours, or even days, with humans acting as supervisors who approve or reject actions when notified by the agent, helping prevent excessive agency (see the second sketch below).
  4. Access controls. Most GenAI apps use a process called retrieval-augmented generation (RAG) to enhance the output of LLMs with knowledge from external resources, such as company databases or APIs. To avoid sensitive information disclosure, the retrieved content should be limited to data the user is allowed to access. With proper authorization and access controls, you can prevent users from obtaining and sharing data they shouldn’t (the third sketch after this list shows the filtering step).
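
To make these requirements concrete, here is a minimal sketch of the secure-API pattern from item 2: before calling a third-party service, the agent trades the user’s token for a short-lived, narrowly scoped token, in the style of OAuth 2.0 token exchange (RFC 8693). The endpoint URLs, client ID, and scope are hypothetical placeholders, not a real Okta or broker configuration.

```python
import requests

# Hypothetical endpoints; replace with a real authorization server and API.
TOKEN_ENDPOINT = "https://auth.example.com/oauth/token"
BROKER_API = "https://broker.example.com/orders"

def exchange_token(user_token: str) -> str:
    """Trade the user's token for a narrowly scoped one the agent can
    present to the third-party API (OAuth 2.0 token exchange, RFC 8693)."""
    resp = requests.post(TOKEN_ENDPOINT, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": user_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
        "scope": "orders:create",    # least privilege: only what this task needs
        "client_id": "genai-agent",  # hypothetical client registration
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def buy_stock(user_token: str, symbol: str, qty: int) -> None:
    """Call the broker API as the user, never with the agent's own credentials."""
    agent_token = exchange_token(user_token)
    resp = requests.post(
        BROKER_API,
        json={"symbol": symbol, "quantity": qty},
        headers={"Authorization": f"Bearer {agent_token}"},
    )
    resp.raise_for_status()
```

The design point is least privilege: the agent never holds the user’s long-lived credentials, only a short-lived token scoped to the single task at hand.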
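
Item 3 is essentially a supervisor pattern: a long-running agent parks sensitive actions and waits for an explicit human decision instead of acting on its own. The sketch below is an in-memory simplification; a production system would use a durable queue and a real notification channel (or a standard flow such as Client-Initiated Backchannel Authentication, CIBA). The get_price and book_flight helpers are hypothetical stand-ins.

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class PendingAction:
    description: str
    approved: Optional[bool] = None  # None = awaiting a human decision
    id: str = field(default_factory=lambda: uuid.uuid4().hex)

# In-memory stand-in for a durable approval queue.
PENDING = {}

def request_approval(description: str) -> PendingAction:
    """Park a sensitive action and notify the human supervisor (stubbed)."""
    action = PendingAction(description)
    PENDING[action.id] = action
    print(f"[notify] approval needed ({action.id}): {description}")
    return action

def watch_airfare(get_price: Callable[[], float],
                  book_flight: Callable[[], None],
                  threshold: float = 200.0) -> None:
    """Run in the background for hours or days; book only once approved."""
    while True:
        if get_price() < threshold:
            action = request_approval(f"Book flight below ${threshold:.2f}")
            while action.approved is None:  # block until approve/reject
                time.sleep(5)
            if action.approved:
                book_flight()
            return
        time.sleep(60)  # condition not met yet; check again later
```

A supervisor approves by setting the pending action’s approved flag; until then, the agent can wait but cannot act.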
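
Finally, the access controls in item 4 come down to filtering retrieved documents against the requesting user’s permissions before they ever reach the LLM’s context window. A minimal sketch, with made-up documents and role-based ACLs standing in for a real vector database and authorization system:

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_roles: set  # ACL attached when the document is ingested

def retrieve(query: str, index: list) -> list:
    """Stand-in for a vector-database similarity search."""
    return [d for d in index if query.lower() in d.text.lower()]

def retrieve_for_user(query: str, index: list, user_roles: set) -> list:
    """Only pass documents the user is entitled to see into the prompt."""
    return [d for d in retrieve(query, index) if d.allowed_roles & user_roles]

index = [
    Document("Q3 salaries by employee", {"hr"}),
    Document("Q3 product roadmap", {"hr", "engineering"}),
]

# An engineer's query never surfaces the HR-only salary document,
# so the LLM cannot disclose it in an answer.
docs = retrieve_for_user("Q3", index, user_roles={"engineering"})
assert all("salaries" not in d.text for d in docs)
```

Because the filter runs after retrieval but before prompt assembly, the model never sees data the user isn’t entitled to, so it can’t leak it.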

The path forward

To take full advantage of GenAI’s potential, organizations must securely integrate GenAI into their applications with all four of these requirements in mind. At the same time, protecting against these unique risks shouldn’t get in the way of innovation or slow the deployment of GenAI agents.

By taking advantage of modern identity solutions, app builders can more easily give AI agents the minimal access to sensitive information they need, secure their integrations, and build processes that keep humans in the loop and prevent overreach. We’re in the early days of an amazing revolution in how AI might make our lives easier, more productive, and more enjoyable. By building secure practices into AI products and processes from the start, companies can earn users’ trust and protect their privacy while delivering incredible experiences we can’t even imagine yet.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.