
Vultr Expands Platform to Better Power Scalable, Cost-Efficient Agentic AI

Vultr, the world’s largest privately held cloud computing platform, is unveiling an expansion of its Vultr Serverless Inference platform, centered on providing the infrastructure needed to support agentic AI. With more and more focus shifting to the utility of AI agents, Vultr’s latest release aims to lower the barrier to successful implementation with agile, scalable, high-performance computing resources at the data center edge.

At the core of this announcement is Vultr’s serverless technology, accelerated by NVIDIA and AMD GPUs, which delivers automated, real-time scaling of AI model inference for optimized performance. Combined with the flexibility, choice, and freedom of Vultr’s platform, enterprises are empowered to pair their preferred models with Vultr’s intelligent AI model serving.

This is particularly relevant for agentic AI implementation, where Vultr allows users to scale custom models powered by the customer’s own data sources without vendor lock-in or compromising IP, security, privacy, or data sovereignty, according to the company. By ensuring AI applications can deliver consistent, low-latency experiences, Vultr maximizes the potential of agentic AI.

“The growing importance of agentic AI calls for developing an open infrastructure stack that addresses the specific needs of enterprises and innovators alike, and Vultr now offers a compelling balance of performance, cost-effectiveness, and energy efficiency,” said Kevin Cochrane, chief marketing officer at Vultr. “As we expand our Serverless Inference capabilities, we’re offering enterprises and AI agent platforms alike a robust alternative to traditional hyperscalers to effectively deploy and scale agentic AI technologies at the global data center edge.”

Two key features of Vultr’s agentic AI-centered announcement are Turnkey RAG and an OpenAI-compatible API.

Turnkey RAG delivers customizable, accurate, and contextually grounded AI outputs by securely storing private data as embeddings in a vector database. From there, large language models (LLMs) can perform inference based on this data without compromising the security of sensitive information. With Turnkey RAG, AI agents are enhanced with dynamically accessible, fresh information that improves decision making and overall responsiveness, according to Vultr. Simultaneously, this capability eliminates the need to send data to publicly trained models, keeping data secure while optimizing its utility.
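The RAG pattern the announcement describes can be sketched in a few lines. The example below is a toy, self-contained illustration of the idea only: the character-frequency "embedding" function and in-memory store are stand-ins for the real embedding models and managed vector database that a service like Turnkey RAG would provide.

```python
import math

# Toy embedding: a character-frequency vector. A stand-in for a real
# embedding model -- a managed RAG service generates embeddings for you.
def embed(text: str) -> list[float]:
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already unit-normalized, so the dot product is the cosine.
    return sum(x * y for x, y in zip(a, b))

# "Vector database": private documents stored as embeddings.
docs = [
    "Q3 revenue grew 12% on strong cloud demand.",
    "The incident postmortem cites a misconfigured load balancer.",
]
store = [(embed(d), d) for d in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank stored documents by similarity to the query embedding.
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# The retrieved context is prepended to the prompt, so the LLM answers
# from private data that never enters a public model's training set.
question = "What caused the incident?"
context = retrieve(question)[0]
prompt = f"Context: {context}\n\nQuestion: {question}"
```

The key property is that the private documents only ever reach the model as retrieved context at inference time, which is the security and freshness benefit the article attributes to Turnkey RAG.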

The OpenAI-compatible API introduces greater cost efficiency for AI integration, lowering the cost per token compared to OpenAI’s offerings. Paired with Vultr’s global AI scaling infrastructure, CIOs can better optimize AI expenses. Additionally, the OpenAI-compatible API accelerates digital transformation by streamlining the integration of AI into existing systems, driving faster development cycles, more efficient experimentation, and quicker time-to-market for agentic AI and other AI capabilities, according to Vultr.
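What "OpenAI-compatible" means in practice is that the request shape matches OpenAI's Chat Completions format, so existing client code can switch providers by changing only the base URL and API key. The sketch below illustrates that request shape; the endpoint URL and model ID shown are assumptions for illustration, so consult Vultr's Serverless Inference documentation for the actual values.

```python
import json

# Hypothetical endpoint and model ID for illustration only -- check
# Vultr's Serverless Inference docs for the real base URL and model IDs.
BASE_URL = "https://api.vultrinference.com/v1/chat/completions"
API_KEY = "YOUR_VULTR_API_KEY"

def build_chat_request(
    messages: list[dict], model: str = "llama-3.1-70b-instruct"
) -> tuple[dict, dict]:
    """Build an OpenAI Chat Completions-format payload and headers.

    Because the payload matches OpenAI's schema, the only
    provider-specific pieces are BASE_URL and API_KEY.
    """
    payload = {"model": model, "messages": messages}
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    return payload, headers

payload, headers = build_chat_request(
    [{"role": "user", "content": "Summarize our Q3 results."}]
)
body = json.dumps(payload)  # POST this body to BASE_URL to run inference
```

This drop-in compatibility is what enables the faster experimentation the article mentions: teams can A/B providers or models without rewriting integration code.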

To learn more about Vultr’s latest release, please visit https://www.vultr.com/.
