Anirban Ghoshal
Senior Writer

Meta is working on its own chip, data center design for AI workloads

News | 19 May 2023 | 2 mins
Artificial Intelligence, Computer Components, Data Center

The Facebook parent said that it is working on a new AI-optimized data center design and the second phase of its 16,000 GPU supercomputer for AI research.

Data center corridor of servers with abstract overlay of digital connections.
Credit: Sdecoret / Getty Images

Facebook parent company Meta has revealed plans to develop its own custom chip for running artificial intelligence models, along with a new data center architecture for AI workloads.

“We are executing on an ambitious plan to build the next generation of Meta’s AI infrastructure and today, we’re sharing some details on our progress. This includes our first custom silicon chip for running AI models, a new AI-optimized data center design and the second phase of our 16,000 GPU supercomputer for AI research,” Santosh Janardhan, head of infrastructure at Meta, wrote in a blog post Thursday.

Meta’s custom chip for running AI models, called Meta Training and Inference Accelerator (MTIA), is designed to provide greater compute power and efficiency than CPUs on the market today, according to Janardhan.

MTIA is customized for internal workloads such as content understanding, feeds, generative AI, and ad ranking, the company said, adding that the first version of the chip was designed in 2020.

Meta’s announcement of the strides it is making toward producing its own custom chips for running AI models comes at a time when other large technology companies — driven by the proliferation of large language models and generative AI — are either working on or have already launched their own chips for AI workloads.

Earlier this month, news reports claimed that Microsoft was working with chipmaker AMD to develop its own chip for running AI workloads. AWS has also released its own chips for AI workloads.

For its part, Meta also said Thursday that its new data center design will be optimized for training AI models, a process that improves their performance as they ingest more data.

“This new data center will be an AI-optimized design, supporting liquid-cooled AI hardware and a high-performance AI network connecting thousands of AI chips together for data center-scale AI training clusters,” Janardhan wrote, adding that the new data center systems will be faster and more cost-effective to build than earlier facilities.

In addition to the new data center design, the company said it was developing AI supercomputers to support the training of next-generation AI models, power augmented reality tools, and enable real-time translation technology.
