AI cloud service provider Lambda has scored a $320 million cash infusion to build out its GPU-based services, which provide AI training clusters made up of thousands of Nvidia accelerators.

Lambda is the latest cloud company to offer GPU processing – instead of the standard CPU processing – dedicated to all things AI, particularly inference and training. Vultr, CoreWeave, and Voltage Park offer similar cloud GPU services.

Lambda is preparing to deploy “tens of thousands” of Nvidia GPUs, including the current top-of-the-line H100 Hopper accelerators as well as Nvidia’s forthcoming H200 GPU accelerators, which are set to double the performance of the H100. Lambda is also looking to deploy Nvidia’s hybrid GH200 CPU/GPU superchips.

Lambda’s stated mission is to build “the #1 AI compute platform in the world,” and to accomplish this, “we’ll need lots of Nvidia GPUs, ultra-fast networking, lots of data center space, and lots of great new software to delight you and your AI engineering team,” the company said in a statement announcing the funding.

The $320 million Series C round is led by B Capital, SK Telecom, and T. Rowe Price Associates, Inc., with participation from existing investors Crescent Cove, Mercato Partners, 1517 Fund, Bloomberg Beta, and Gradient Ventures, among others.

“With this new financing, Lambda will accelerate the growth of our GPU cloud, ensuring AI engineering teams have access to thousands of Nvidia GPUs with high-speed Nvidia Quantum-2 InfiniBand networking,” the company said.

This is undoubtedly music to Nvidia CEO Jensen Huang’s ears, as he has been pushing the notion of dedicated AI data centers, called AI factories, that are populated entirely with GPUs rather than the x86 CPUs found in traditional data centers.
Additionally, on the most recent earnings call after Nvidia’s blowout quarter, Huang talked at length about the benefits of expanding GPU processing to fields beyond AI, in a move to muscle in on x86 territory.

Founded in 2012, Lambda has been working with GPU systems since 2017, when it first started to experiment with transformer models. Lambda offers colocation services specifically designed for dense deployments and resells access to Nvidia’s DGX SuperPODs. The latter is likely to be Lambda’s bread and butter, as it is much cheaper to rent AI hardware than to purchase and maintain it. This is driving the rise of AI as a service, which lets customers rent time on AI-ready equipment rather than buy their own.

The real challenge for Lambda may be getting the hardware at all. TSMC is making chips as fast as it can, but demand is enormous and a backlog stretching from weeks to months remains.