Elon Musk wants to build the world's largest supercomputer using 100,000 Nvidia H100 GPUs

During a meeting with investors in May, Elon Musk said he wanted to build the world's largest supercomputer, dubbed the "Gigafactory of Computing", using 100,000 Nvidia H100 GPUs for AI training.

Specifically, Musk said his xAI startup needs 100,000 of the specialized graphics cards to train and run the next version of its Grok chatbot. He plans to connect all of these GPUs into a single supercomputer and wants the "Gigafactory of Computing" to be operational by next fall.

Musk said he would be personally responsible for delivering the machine on time, and that xAI may partner with Oracle on the system to speed up progress.

According to Reuters, once the 100,000 H100 GPUs are connected, the "Gigafactory of Computing" would be four times more powerful than today's largest GPU clusters, making it the largest supercomputer in the world.

Neither xAI nor Oracle has commented on Musk's statement.

Analysts say that if the "Gigafactory of Computing" becomes a reality, its computing power could put xAI ahead of its competitors and attract more resources. The supercomputer also reflects the scale of Musk's ambition and his vision for advancing AI.

xAI was founded by Elon Musk last July. Earlier this year, the company said that training Grok 2 required about 20,000 H100 GPUs, while the next version of the chatbot would need 100,000 H100 chips.

In March, Elon Musk announced that xAI would open-source the Grok chatbot so that everyone could access the company's technology for free.