Meta is developing its own AI chips to reduce its reliance on Nvidia.
Meta is accelerating its push toward self-reliance in artificial intelligence infrastructure by beginning mass production of its own line of AI chips. The plan aims to serve large-scale data centers while reducing dependence on external hardware suppliers.
According to Yee Jiun Song, Vice President of Engineering at Meta, demand for AI inference processing is growing rapidly, and it is an area where the company has been investing heavily.
Over the years, Meta has spent billions of dollars building its in-house chip design team. While still using solutions from major partners like Nvidia and AMD, the company is prioritizing the development of its own hardware because it can directly optimize it for its specific data types and workloads. This approach helps save energy and reduce operating costs for large-scale AI systems.
The new processors are part of the Meta Training and Inference Accelerator (MTIA) program – a hardware platform for AI training and inference. After testing the first two generations, MTIA 100 and MTIA 200, Meta has now developed a roadmap for the next four chip models: MTIA 300, MTIA 400, MTIA 450, and MTIA 500.
Among them, the MTIA 300 has officially entered mass production. The chip is designed to support research and development tasks and will serve as a technological foundation for future generations. Currently, the MTIA 300 is primarily used to train the content-ranking and recommendation algorithms, as well as the systems that process data for hundreds of millions of Facebook and Instagram users every day.
The next version is the MTIA 400, upgraded to better support generative AI models while maintaining its ability to serve research and algorithm optimization tasks. This chip can scale up to 72 accelerators, and Meta assesses its performance as competitive with many current commercial solutions. The company says testing is complete and it is beginning to deploy the chip in data centers.
Next on the roadmap is the MTIA 450, heavily focused on inference capabilities for generative AI. To improve performance, Meta has doubled the HBM memory bandwidth compared to the previous generation. According to the company, this improvement gives the MTIA 450 superior performance compared to many AI processors on the market. The chip is expected to enter mass production and widespread deployment in early 2027.
The final phase of the current plan is the MTIA 500. While still focused on AI inference, this version further increases HBM bandwidth by approximately 50% compared to the MTIA 450, and adds several improvements in handling low-precision data – a crucial factor for modern AI models. The chip is expected to be operational in the second half of 2027.
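The article does not say which low-precision formats the MTIA 500 targets. As a general illustration of why low-precision handling matters for inference, here is a minimal sketch of symmetric int8 quantization in NumPy; the function names are illustrative and not part of Meta's stack.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric int8 quantization: map float32 values onto [-127, 127]."""
    max_abs = float(np.abs(x).max())
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 values from the int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
weights = rng.standard_normal(1024).astype(np.float32)

q, scale = quantize_int8(weights)
approx = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and the round-off
# error is bounded by half a quantization step (scale / 2).
ratio = weights.nbytes // q.nbytes
max_err = float(np.abs(weights - approx).max())
```

The trade-off this sketch shows is the one low-precision hardware exploits: each value takes a quarter of the memory (and bandwidth) of float32, at the cost of a small, bounded rounding error.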
To develop the MTIA series, Meta collaborated with Broadcom on the design and used the open-source RISC-V architecture. Physical manufacturing of the chips will be handled by the semiconductor company TSMC.
Many industry experts consider Meta's pace of AI chip development fast by the general standards of the semiconductor industry. This is all the more noteworthy given that Meta is primarily a social media company, not a traditional hardware manufacturer.
Developing its own chips does offer Meta a significant advantage: the company can design hardware tailored to its specific AI workloads and deploy innovations faster than if it relied entirely on external suppliers.
However, the path to hardware self-sufficiency is not easy. Designing custom chips is costly and technically complex. In recent years, Meta has spent tens of billions of dollars buying GPUs from Nvidia and AMD, while also signing agreements to lease AI chips from Google to meet its growing computing needs.
This year alone, Meta plans to spend between $115 billion and $135 billion on capital investments. The majority of this budget will be dedicated to expanding its AI infrastructure and building new data centers to support the company's ambitions in artificial intelligence development.