Meta Begins Testing First In-House AI Training Chip
Meta has started testing its first in-house chip for training artificial intelligence systems, marking a significant step in its efforts to reduce reliance on external suppliers like Nvidia, according to two sources.

The social media giant has deployed the chip on a small scale and plans to expand production if the test proves successful. The move is part of Meta’s long-term strategy to lower infrastructure costs as it invests heavily in AI-driven growth.
Meta, which owns Facebook, Instagram, and WhatsApp, has projected total expenses of USD 114 billion to USD 119 billion for 2025, with up to USD 65 billion allocated to AI infrastructure.
One source said the new training chip is a dedicated accelerator, designed specifically for AI tasks, making it more power-efficient than traditional graphics processing units (GPUs). Taiwan-based chip manufacturer TSMC is producing the chip for Meta.
The test deployment began after Meta completed its first "tape-out" of the chip, a crucial step in silicon development that involves sending an initial design to a chip factory. This process typically costs tens of millions of dollars and takes three to six months, with no guarantee of success. If the test fails, Meta would need to diagnose the issue and repeat the tape-out process.
The chip is part of Meta’s Meta Training and Inference Accelerator (MTIA) series, which has faced setbacks in the past. The company previously scrapped a chip at a similar development stage. However, in 2023, Meta successfully deployed an MTIA chip for inference, which helps determine content recommendations on Facebook and Instagram.
Meta executives aim to use in-house chips for AI training by 2026. The training chip will initially support recommendation systems before expanding to generative AI applications, such as the Meta AI chatbot.
Chief Product Officer Chris Cox described the company’s chip development as a "walk, crawl, run" process. He noted that the first-generation inference chip for recommendations was a "big success."
Meta previously abandoned an in-house inference chip after a failed small-scale test and instead placed large orders for Nvidia GPUs in 2022. Since then, the company has remained one of Nvidia’s biggest customers, using GPUs to train AI models for recommendations, ads, and its Llama foundation model series.
The value of Nvidia’s GPUs has come under scrutiny as AI researchers question the effectiveness of scaling up large language models with ever more data and computing power. These concerns intensified after Chinese startup DeepSeek launched low-cost models in January that achieve computational efficiency by leaning more heavily on inference than on training-time compute.
DeepSeek’s advancements triggered a global decline in AI stocks, causing Nvidia shares to drop by as much as 20% before recovering most of their losses. Investors continue to bet on Nvidia’s dominance in AI training and inference, though trade concerns have led to renewed stock fluctuations.
Key takeaways:
- Meta is testing its first in-house AI training chip to reduce reliance on Nvidia.
- The chip, produced by TSMC, is a dedicated AI accelerator designed for improved power efficiency.
- Meta aims to use in-house chips for AI training by 2026, starting with recommendation systems.
Source: REUTERS