Elon Musk posted a photo of a supercomputer running a self-designed chip

Elon Musk has shared the first photo of the Dojo supercomputer, built on Tesla's self-designed D1 chip, with computing power equivalent to about 8,000 Nvidia H100 chips, commenting that it is not huge, but not trivial either. The supercomputer is expected to be operational by the end of this year.

In the image Musk shared, the supercomputer running the D1 chip appears nearly complete: the front is covered with metal panels, while the back holds a neatly arranged wiring system.

Musk is building two parallel supercomputer clusters, each worth billions of dollars, for Tesla and his startup xAI. The xAI cluster, equipped with 100,000 Nvidia H100 GPUs to train the Grok AI, is claimed to be "the most powerful in the world" and is located in Memphis, Tennessee. A second, smaller data center uses Tesla's self-designed D1 chip.

The D1 chip was first revealed in 2021. Developed by Tesla and manufactured by TSMC on a 7 nm process, it packs 50 billion transistors and targets a performance of 322 teraflops (trillions of calculations per second).

According to Tesla's announcement, D1 uses a system-on-wafer design with a 5x5 array: 25 ultra-high-performance chips interconnected via TSMC's Integrated Fan-Out (InFO) packaging so that they operate as a single processor. Dojo D1 and its successors are tuned for machine learning, video training, and the self-driving technology in Tesla vehicles, while the supercomputer in Memphis is mainly used to train xAI's Grok.
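Taking the article's own figures (a 5x5 array of chips, each targeting 322 teraflops), the aggregate throughput of one such wafer-level tile can be sketched as a back-of-envelope calculation; this is derived from the numbers quoted here, not an official Tesla specification:

```python
# Back-of-envelope math using only the figures quoted in the article.
# These are the article's numbers, not official Tesla specifications.
CHIP_TFLOPS = 322       # stated D1 performance target, in teraflops
CHIPS_PER_TILE = 5 * 5  # the 5x5 array packaged to act as one processor

tile_tflops = CHIP_TFLOPS * CHIPS_PER_TILE
print(f"One 5x5 tile: {tile_tflops} TFLOPS (~{tile_tflops / 1000:.2f} PFLOPS)")
```

On the article's figures this puts a single 25-chip tile in the multi-petaflop range, which illustrates why the array is treated as one large processor rather than 25 separate chips.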

Musk previously revealed that his AI training system combines the Tesla Hardware 4 computer (HW4, since renamed Artificial Intelligence 4, or AI4) with Nvidia GPUs at a ratio of about 1:2. That works out to roughly 90,000 Nvidia H100 chips plus 40,000 AI4 chips, along with Dojo D1 wafers, in operation by the end of 2024.
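The chip counts above can be checked against the stated roughly 1:2 AI4-to-Nvidia ratio with a quick sketch; the counts are the ones quoted in the article, and reading the ratio as AI4:H100 is an assumption:

```python
# Consistency check of the counts quoted in the article.
H100_COUNT = 90_000  # Nvidia H100 chips by end of 2024 (article's figure)
AI4_COUNT = 40_000   # Tesla AI4 (HW4) chips (article's figure)

# Assumption: the stated ~1:2 ratio means AI4 chips to Nvidia GPUs.
ratio = AI4_COUNT / H100_COUNT
print(f"AI4 : H100 = 1 : {1 / ratio:.2f}")
```

The result, about 1:2.25, is roughly consistent with the approximate 1:2 ratio Musk described.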
