Core42, a G42 company specialising in sovereign cloud, AI infrastructure, and digital services, recently announced the global launch of its Inference-as-a-Service offering, powered by Qualcomm Technologies, Inc.’s comprehensive platform.
Available through all Core42 data centres worldwide, the service enhances AI deployment and performance for Software-as-a-Service (SaaS) providers and generative AI developers by providing immediate access to essential models while simplifying the complexities of infrastructure management.
The rapid rise of generative AI applications, from image and code generation to chatbots and text summarization, is making it harder for customers to identify and adopt the right infrastructure. Scaling API calls while maintaining high performance has become increasingly challenging, demanding substantial computing power and AI expertise from users who need to streamline their AI pipelines and build new applications.
Qualcomm Technologies’ platform, which powers Core42’s Inference-as-a-Service offering, addresses these challenges by combining AI inference accelerators, standardized APIs, and pre-built generative AI applications into a single, integrated service. The platform provides straightforward access to the latest AI models and applications, ensuring optimal performance and significantly reducing operational costs.
“Our Inference-as-a-Service offering, already powered by the Core42 Compass API, is now further enhanced with Qualcomm Technologies’ end-to-end advanced inference-as-a-service platform,” said Raghu Chakravarthi, EVP of Engineering and GM Americas at Core42. “We are optimizing AI inference at scale to drive sustainability and deliver transformative outcomes across industries. This collaboration not only strengthens our technological capabilities but also accelerates our global expansion plans. By providing advanced AI solutions through our worldwide data centres, we are empowering businesses across the globe to innovate faster and more efficiently, positioning Core42 as a leader in the AI infrastructure space.”
Core42’s Inference-as-a-Service allows seamless integration of new AI models, enabling users to stay current with the latest advancements and easily expand their AI capabilities. Users can choose from optimised inference containers compatible with any orchestration platform, accelerated APIs, or a user-friendly UI. With high-availability containers that support autoscaling at both the server and model levels, the platform adapts to varying performance requirements.
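To illustrate what calling such an accelerated API might look like from a SaaS backend, the minimal Python sketch below sends a chat request to a hypothetical OpenAI-compatible endpoint. The base URL, model identifier, and authentication scheme are assumptions for illustration only, not Core42’s published interface.

```python
import os

import requests

# Hypothetical endpoint and model name for illustration only; the actual
# Compass API routes, model identifiers, and auth scheme may differ.
API_BASE = "https://api.core42.example/v1"
API_KEY = os.environ["CORE42_API_KEY"]

response = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "llama-3-70b-instruct",  # placeholder model identifier
        "messages": [
            {"role": "user", "content": "Summarise our Q3 support tickets."}
        ],
        "max_tokens": 256,
    },
    timeout=30,
)
response.raise_for_status()

# Print the generated completion returned by the service.
print(response.json()["choices"][0]["message"]["content"])
```

In a setup like this, swapping in a newer model would typically mean changing only the model identifier, with scaling and container management handled by the service rather than the application code.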
The offering also empowers developers at every level with pre-built generative AI applications for chat, image, and code generation, as well as tools to create custom applications using familiar frameworks. Powered by Qualcomm Cloud AI 100 Ultra inference accelerators, the platform delivers best-in-class performance per total cost of ownership dollar. Additionally, the solution’s programmability supports diverse data formats and advanced AI optimisation techniques, keeping cloud services at the forefront of AI innovation.
“We are proud to support Core42 with a seamless, scalable solution for delivering powerful generative AI capabilities and making AI accessible, combining ease of use with optimised performance per TCO,” said Rashid Attar, VP, Cloud Computing, Qualcomm Technologies, Inc. “At less than half the cost of alternatives, and with all the convenience of a full-service solution, developers can stay ahead of the curve, positioning their businesses for the AI innovations of tomorrow without the burden of complex infrastructure management.”
SaaS providers and AI developers are invited to explore Core42’s platform and transform their AI capabilities. For more information and to access a free trial, visit https://bit.ly/Qualcomm-Core42-Playground.
Image Credit: Core42