Kong’s updated AI Gateway helps to secure AI model production deployments

The application programming interface technology company Kong Inc. today announced an updated version of the Kong AI Gateway, introducing new features that it says will provide the security and governance controls enterprises need to deploy generative artificial intelligence and AI agents in production.

The latest edition of the Kong AI Gateway comes with automated retrieval-augmented generation, or RAG, pipelines that aim to reduce AI hallucinations. There's also a plugin for sanitizing personally identifiable information, designed to protect sensitive details, passwords and more across 18 different languages.

Kong is a provider of API and microservices management tools, connecting these two vital components of AI across public clouds, Kubernetes environments and on-premises data centers. APIs are essential for modern applications, enabling them to communicate easily with other apps and web services. For example, when booking a flight through a service such as Skyscanner, it's an API that allows the reservation to appear immediately in the user's Google Calendar. Kong's technologies help simplify API management across multiple computing environments with one-click deployment mechanisms.

The company has established itself as a clear leader in API management for traditional apps, and it's keen to play a similar role for AI applications, which are also heavily reliant on APIs. That's why it has built a dedicated AI Gateway specifically for AI applications. Launched last year, the Kong AI Gateway is designed to make it easier for companies to deploy AI plugins and large language models in any application. By providing security and governance, it helps companies control the flow of data to any LLM or application and effectively manage consumption.
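The basic mechanics of a RAG pipeline like the one the gateway automates can be sketched in a few lines. This is an illustrative toy, not Kong's implementation: the `embed`, `retrieve` and `augment_prompt` functions are hypothetical stand-ins, and a bag-of-words vector fills in for a real embedding model and vector database.

```python
# Toy RAG step: find the most relevant stored document for a query and
# inject it into the prompt before it would reach the model.
import math
from collections import Counter


def embed(text: str) -> Counter:
    # Stand-in "embedding": bag-of-words term counts.
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def retrieve(query: str, docs: list[str]) -> str:
    # Return the stored document most similar to the query.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))


def augment_prompt(query: str, docs: list[str]) -> str:
    # Prepend retrieved context so the model answers from proprietary
    # data instead of guessing, which is how RAG curbs hallucinations.
    context = retrieve(query, docs)
    return f"Context: {context}\n\nQuestion: {query}"


docs = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping policy: orders ship within 2 business days.",
]
prompt = augment_prompt("What is the refund policy?", docs)
print(prompt)
```

The point of doing this at the gateway layer, per the article, is that applications never talk to the vector database directly, which adds a protective layer in front of it.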
With today's update, Kong AI Gateway 3.10 is getting an AI RAG injector, which is aimed at solving the challenge of AI hallucinations, the term for when LLMs provide false or inaccurate information in response to a prompt. With this, the AI Gateway enables LLMs to automatically query a vector database and insert relevant data into any prompt on the fly, ensuring the model can augment its knowledge with proprietary information sources.

Kong says that by bringing RAG pipelines into the AI Gateway, it's also enhancing security, since the gateway provides an additional protective layer for companies' vector databases, which are used to store unstructured data in a format that AI models can easily understand. The move should also help enhance developer productivity, Kong says, because it simplifies the process of integrating existing RAG pipelines with applications through a no-code and low-code interface.

The other main new feature in the Kong AI Gateway is a new sanitization tool for personally identifiable information, or PII. According to Kong, this makes it easy for teams to "sanitize" PII across 18 languages for many of the most widely used LLMs. Teams will be able to enforce this sanitization at the global platform level, so developers won't have to add the capability manually to each application they build and deploy.

The feature works similarly to other LLM sanitization tools, either replacing sensitive PII with tokens or redacting it entirely. However, Kong also provides the option for the original data to be reinserted into the LLM's response just before it reaches the end user. In this way, users can still receive the data they need, even if it's too sensitive for the LLM itself to be allowed to see.

With these new features, Kong co-founder and Chief Technology Officer Marco Palladino says the AI Gateway can help companies overcome two of the major hurdles in deploying AI applications in production.
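The tokenize-and-reinsert flow described above can be sketched as follows. This is a hedged illustration of the general technique, not Kong's plugin: the regex patterns, function names and token format are assumptions made up for the example.

```python
# Toy PII flow: swap sensitive values for placeholder tokens before the
# prompt goes to the LLM, then restore the originals in the response
# just before it reaches the end user.
import re

# Illustrative detection patterns only; real tools use far richer rules.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.\w+"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}


def sanitize(text: str) -> tuple[str, dict[str, str]]:
    # Replace each PII match with a token like [EMAIL_0], remembering
    # the original value so it can be reinserted later.
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            token = f"[{label}_{i}]"
            mapping[token] = match
            text = text.replace(match, token, 1)
    return text, mapping


def reinsert(text: str, mapping: dict[str, str]) -> str:
    # Restore original values in the LLM's response before delivery,
    # so the model never saw the raw PII but the user still gets it.
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text


prompt = "Email alice@example.com or call 555-123-4567"
safe_prompt, mapping = sanitize(prompt)
llm_response = "I will contact [EMAIL_0] shortly."
final = reinsert(llm_response, mapping)
```

Enforcing this at the gateway, rather than in each application, is what lets the sanitization policy apply platform-wide without per-app code.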
“With this latest version of Kong AI Gateway, we’re equipping our customers with the tools necessary to implement agentic AI securely and effectively, ensuring seamless integration without compromising user experience,” he said. “Moreover, we’re helping solve some of the biggest challenges with LLMs, such as cutting down on hallucinations and improving data security and governance.”

Image: SiliconANGLE/Meta AI