HPE introduces AI Cloud for large language models
The offering will support AI and HPC jobs on hundreds or thousands of CPUs or GPUs at once, said the company.
LAS VEGAS: Hewlett Packard Enterprise (HPE) has entered the AI cloud market by expanding its HPE GreenLake portfolio to offer large language models (LLMs) for any enterprise.
The new offering is the first in a series of industry- and domain-specific AI applications, with future support planned for climate modelling, healthcare and life sciences, financial services, manufacturing, and transportation, the company said during its event here.
With the introduction of HPE GreenLake for LLMs, enterprises can privately train, tune, and deploy large-scale AI on a sustainable supercomputing platform that combines HPE's AI software with its market-leading supercomputers.
"We have reached a generational market shift in AI that will be as transformational as the web, mobile, and cloud," said Antonio Neri, president and CEO, at HPE.
Organisations can now embrace AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models at scale and responsibly, Neri added.
HPE GreenLake for LLMs will be delivered in partnership with Aleph Alpha, a German AI startup and HPE's first partner for the service, giving users a field-proven, ready-to-use LLM to power use cases requiring text and image processing and analysis.
Unlike general-purpose cloud offerings that run multiple workloads in parallel, HPE GreenLake for LLMs runs on an AI-native architecture designed to run a single large-scale AI training and simulation workload at full computing capacity, supporting AI and HPC jobs on hundreds or thousands of CPUs or GPUs at once.
The company is accepting orders for HPE GreenLake for LLMs now and expects additional availability by the end of this year, starting in North America, with availability in Europe expected to follow early next year.