Elon Musk Buys Thousands of GPUs for Twitter’s Generative AI Project
Despite advocating a pause on AI training across the industry, Elon Musk has reportedly launched a major artificial intelligence project within Twitter. According to Business Insider, the company has already purchased about 10,000 GPUs, and Musk is hiring AI talent from DeepMind for projects involving large language models (LLMs).
A source familiar with the matter said Musk’s AI project is still in its early stages. However, according to another individual, Musk has acquired significant additional computing power, suggesting his commitment to moving the project forward. The exact purpose of the generative AI effort remains unknown, but potential applications include improving search functionality and generating targeted advertising content.
At this time, it’s unclear exactly what hardware Twitter has procured. However, Twitter has reportedly spent tens of millions of dollars on these compute GPUs despite ongoing financial problems, which Musk has described as a “precarious financial situation.” The GPUs will be deployed in one of Twitter’s two remaining data centers, with Atlanta the most likely destination. Notably, Musk shut down Twitter’s primary data center in Sacramento in late December, a move that clearly reduced the company’s computing capacity.
In addition to purchasing GPU hardware for its generative AI projects, Twitter is hiring additional engineers. Earlier this year, the company hired Igor Babuschkin and Manuel Kroiss, engineers from Alphabet’s AI research subsidiary DeepMind. Musk has been actively seeking talent in the AI industry to compete with OpenAI’s ChatGPT since at least February.
OpenAI used Nvidia’s A100 GPUs to train its ChatGPT bot and continues to run the service on these machines. Nvidia has since launched the H100 compute GPU, the successor to the A100, which is several times faster at roughly the same power consumption. Twitter will likely use Nvidia’s Hopper H100 or similar hardware for its AI projects, though this is speculation. Since the company has not yet decided what its AI projects will be used for, it’s difficult to estimate how many Hopper GPUs it will need.
When a large company like Twitter buys hardware, it typically negotiates a volume discount because it procures units by the thousands. By contrast, Nvidia’s H100 boards can run over $10,000 per unit when purchased individually from retailers like CDW.