Vultr Launches Cloud Inference to Simplify Model Deployment and Automatically Scale AI Applications Globally

New serverless Inference-as-a-Service offering available from Vultr across six continents and 32 locations worldwide

WEST PALM BEACH, Fla. — Vultr, the world’s largest privately held cloud computing platform, today announced the launch of Vultr Cloud Inference. This new serverless platform revolutionizes AI scalability and reach by offering global AI model deployment and AI inference capabilities. Leveraging Vultr’s global infrastructure spanning six continents and 32 locations, Vultr Cloud Inference provides customers with seamless scalability, reduced latency, and enhanced cost efficiency for their AI deployments.

Today’s rapidly evolving digital landscape has challenged businesses across sectors to deploy and manage AI models efficiently and effectively. This has created a growing need for inference-optimized cloud infrastructure platforms that combine global reach with scalability to ensure consistently high performance. Priorities are shifting as a result: organizations increasingly focus spending on inference as they move their models into production. But with bigger models comes increased complexity. Developers are being challenged to optimize AI models for different regions, manage distributed server infrastructure, and ensure high availability and low latency.

With that in mind, Vultr created Cloud Inference. Vultr Cloud Inference will accelerate the time-to-market of AI-driven features, such as predictive and real-time decision-making, while delivering a compelling user experience across diverse regions. Users can simply bring their own model, trained on any platform, cloud, or on-premises environment, and it can be seamlessly integrated and deployed on Vultr’s global NVIDIA GPU-powered infrastructure. With dedicated compute clusters available on six continents, Vultr Cloud Inference ensures that businesses can comply with local data sovereignty, data residency, and privacy regulations by deploying their AI applications in regions that align with legal requirements and business objectives.

“Training provides the foundation for AI to be effective, but it’s inference that converts AI’s potential into impact. As an increasing number of AI models move from training into production, the volume of inference workloads is exploding, but the majority of AI infrastructure is not optimized to meet the world’s inference needs,” said J.J. Kardwell, CEO of Vultr’s parent company, Constant. “The launch of Vultr Cloud Inference enables AI innovations to have maximum impact by simplifying AI deployment and delivering low-latency inference around the world through a platform designed for scalability, efficiency, and global reach.”

With the capability to self-optimize and auto-scale globally in real time, Vultr Cloud Inference ensures AI applications provide consistent, cost-effective, low-latency experiences to users worldwide. Moreover, its serverless architecture eliminates the complexities of managing and scaling infrastructure, delivering unparalleled impact, including:

  • Flexibility in AI model integration and migration: With Vultr Cloud Inference, users get a straightforward, serverless AI inference platform that allows for easy integration of AI models, regardless of where they were trained. Whether models were developed on Vultr Cloud GPUs powered by NVIDIA, in a user’s own data center, or on another cloud, Vultr Cloud Inference enables hassle-free global inference.
  • Reduced AI infrastructure complexity: By leveraging the serverless architecture of Vultr Cloud Inference, businesses can concentrate on innovation and creating value through their AI initiatives rather than focusing on infrastructure management. Cloud Inference streamlines the deployment process, making advanced AI capabilities accessible to companies without extensive in-house expertise in infrastructure management, thereby speeding up the time-to-market for AI-driven solutions.
  • Automated scaling of inference-optimized infrastructure: Through real-time matching of AI application workloads and inference-optimized cloud GPUs, engineering teams can seamlessly deliver performance while ensuring the most efficient use of resources. This leads to substantial cost savings and reduced environmental impact, as teams pay only for what is needed and used.
  • Private, dedicated compute resources: With Vultr Cloud Inference, businesses can access an isolated environment for sensitive or high-demand workloads. This provides enhanced security and performance for critical applications, aligning with goals around data protection, regulatory compliance, and maintaining high performance under peak loads.
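In practice, a serverless, bring-your-own-model service like the one described above is typically consumed by sending HTTP inference requests to a managed endpoint, with the provider handling placement and scaling behind the scenes. The sketch below is purely illustrative: it assumes a generic OpenAI-style chat-completions interface, and the endpoint URL, model name, and API key are placeholders, not Vultr’s documented interface.

```python
import json
import urllib.request

# Placeholder values -- substitute the real endpoint, model ID, and API key
# from your provider's dashboard. These are NOT Vultr's documented values.
ENDPOINT = "https://inference.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"


def build_inference_request(prompt: str, model: str = "example-model") -> urllib.request.Request:
    """Construct an OpenAI-style chat-completions request.

    Building the request separately from sending it keeps the payload
    easy to inspect, log, and test without a live endpoint.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


if __name__ == "__main__":
    req = build_inference_request("Summarize edge inference in one sentence.")
    # Actually sending the request requires a live endpoint and a valid key:
    # with urllib.request.urlopen(req) as resp:
    #     print(json.load(resp)["choices"][0]["message"]["content"])
    print(req.full_url)
```

Because the client only ever talks to one stable URL, the provider is free to route each request to the nearest inference-optimized GPU cluster, which is what enables the low-latency, auto-scaling behavior described in the bullets above.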

“Demand is rapidly increasing for cutting-edge AI technologies that can power AI workloads worldwide,” said Matt McGrigg, director of global business development, cloud partners at NVIDIA. “The introduction of Vultr Cloud Inference will empower businesses to seamlessly integrate and deploy AI models trained on NVIDIA GPU infrastructure, helping them scale their AI applications globally.”

As AI continues to push the limits of what’s possible and change the way organizations think about cloud and edge computing, the scale of infrastructure needed to train large AI models and support globally distributed inference has never been greater. Following the recent launch of Vultr CDN to scale media and content delivery worldwide, Vultr Cloud Inference will provide the technological foundation for organizations around the world, across industries, to innovate, increase cost efficiency, and expand global reach, making the power of AI accessible to all.

Vultr Cloud Inference is now available for early access via registration. Learn more about Vultr Cloud Inference at NVIDIA GTC and contact sales to get started.

About Constant and Vultr

Constant, the creator and parent company of Vultr, is on a mission to make high-performance cloud computing easy to use, affordable, and locally accessible for businesses and developers worldwide. Vultr has served over 1.5 million customers across 185 countries with flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. Founded by David Aninowsky and completely bootstrapped, Vultr has become the world’s largest privately held cloud computing company without ever raising equity financing. Learn more at: www.vultr.com.

View source version on businesswire.com: https://www.businesswire.com/news/home/20240318972363/en/

Contacts

Janabeth Ward
vultrpr@scratchmm.com
