GPT-5 Training Will Cost $2.5 Billion and Start Next Year
Twitter user Martin Shkreli from NY posted today that GPT-5 will require an estimated $2.0-$2.5 billion for training. This training would reportedly involve 500,000 H100 Tensor Core GPUs running for 90 days, or an equivalent configuration, and is set to commence next year.
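A quick sanity check puts those figures in perspective. The sketch below uses only the article's own estimates (500,000 GPUs, 90 days, $2.5 billion); the implied per-GPU-hour rate is a derived illustration, not a confirmed number.

```python
# Rough sanity check of the reported figures (all inputs are the
# article's estimates, not confirmed numbers).
num_gpus = 500_000      # H100s, per the cited post
days = 90
budget_usd = 2.5e9      # upper end of the $2.0-2.5B estimate

gpu_hours = num_gpus * days * 24
cost_per_gpu_hour = budget_usd / gpu_hours

print(f"{gpu_hours:,} GPU-hours")               # 1,080,000,000 GPU-hours
print(f"${cost_per_gpu_hour:.2f} per GPU-hour")  # $2.31 per GPU-hour
```

An implied rate of roughly $2.31 per GPU-hour is in the same ballpark as quoted cloud H100 rental prices, which makes the headline figure at least internally plausible.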
OpenAI is actively working on enhancing GPT-4 with various capabilities, such as embodiment, agency, Socratic reasoning, knowledge graphs, world models, multimodality, planning, semantic interpretability, hive minds, control and boundedness, as well as smaller high-value tasks.
The scale of H100/A100 production raises questions. Will there be enough of these GPUs available for such a substantial undertaking? Approximately 1 million H100s are expected to be produced by the end of the year, and an estimated 5 million may be shipped the following year.
Regarding the cost, there is a valid point about the GPUs themselves: counting their full purchase price as a training expense may be misleading, since the hardware remains usable after training completes. Even so, acquiring those GPUs alone could amount to $20 billion.
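Multiplying the fleet size by the street prices quoted later in this article gives a rough lower bound on that capital outlay. The article's $20 billion figure implies roughly $40,000 per unit, which plausibly includes servers, networking, and data-center buildout rather than GPUs alone; that interpretation is an assumption.

```python
# Hypothetical hardware-capital math, using the $25,000-30,000
# street prices quoted later in the article for a single H100.
num_gpus = 500_000
for price in (25_000, 30_000):
    total = num_gpus * price
    print(f"${price:,} each -> ${total / 1e9:.1f}B total")
# $25,000 each -> $12.5B total
# $30,000 each -> $15.0B total
```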
It’s worth noting that chip manufacturer TSMC’s maximum production capacity for H100s was recently around 15,000 units per month, but it has reportedly ramped up to approximately 50,000 units per month.
In terms of electricity costs, they represent a relatively small fraction of the overall compute cost. To put it in perspective, 6,000,000 kWh would amount to roughly $1 million.
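To test that claim against the 500,000-GPU scenario, a hedged estimate follows. The 700 W figure is the H100 SXM board power, the $0.17/kWh rate is the one implied by the article's own example ($1 million per 6,000,000 kWh), and the calculation deliberately ignores server, networking, and cooling overhead, so the real figure would be somewhat higher.

```python
# Hedged electricity estimate for the hypothetical 90-day run.
# 700 W is the H100 SXM board power; $0.17/kWh is the rate implied
# by the article's example; cooling/overhead (PUE) is excluded.
num_gpus = 500_000
watts_per_gpu = 700
hours = 90 * 24
rate_usd_per_kwh = 0.17

kwh = num_gpus * watts_per_gpu * hours / 1000
cost = kwh * rate_usd_per_kwh
print(f"{kwh / 1e6:.0f} million kWh -> ${cost / 1e6:.0f}M")
# 756 million kWh -> $129M
```

Even at roughly $129 million, electricity would be only about 5% of a $2.5 billion budget, consistent with the article's claim that power is a relatively small fraction of the overall compute cost.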
Acquiring 500,000 H100s by next year appears to be a challenging task, even with the support of Microsoft. Additionally, questions arise about the cost of inference if the training process is indeed as compute-intensive as suggested.
In the context of Nvidia’s market performance in 2023, it’s noteworthy that the company’s valuation has reportedly tripled, with its market capitalization surpassing $1 trillion. This growth can be attributed largely to the increased adoption of Nvidia chips in AI applications. However, it’s important to consider that U.S. export constraints have limited the sale of high-end AI chips in the Chinese market, which could impact manufacturing and training costs.
Nvidia is generating a profit margin of nearly 1,000% on each H100 GPU accelerator it sells, according to Barron’s senior writer Tae Kim. That is, the street price of around $25,000 to $30,000 per accelerator compares with an estimated $3,320 for the chip and peripheral components, a figure believed to reflect pure manufacturing cost. Nvidia’s R&D spending also needs to be considered, as developing chips like the H100 requires thousands of hours from specialized engineers. Nevertheless, Nvidia’s AI-accelerating products are reportedly sold out through 2024, with the AI accelerator market expected to be worth around $150 billion by 2027.
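The arithmetic behind that headline margin can be sketched as follows. Both inputs are Kim's estimates; the $3,320 unit cost excludes R&D, so the true margin is lower than this naive calculation suggests.

```python
# Margin math behind the "nearly 1,000%" figure (Tae Kim's estimates).
# $3,320 is the estimated manufacturing cost, excluding R&D.
unit_cost = 3_320
for price in (25_000, 30_000):
    markup_pct = (price - unit_cost) / unit_cost * 100
    print(f"${price:,}: ~{markup_pct:.0f}% markup over unit cost")
# $25,000: ~653% markup over unit cost
# $30,000: ~804% markup over unit cost
```

Depending on whether one quotes markup over cost or the price-to-cost ratio, the result lands in the 650-900% range, which is how a figure of "nearly 1,000%" arises.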
The company is benefiting from its infrastructure and product stack, but customer budgets and opportunity costs may limit investment in other areas or constrain riskier research and development ventures.
Disclaimer
In line with the Trust Project guidelines, please note that the information provided on this page is not intended to be and should not be interpreted as legal, tax, investment, financial, or any other form of advice. It is important to only invest what you can afford to lose and to seek independent financial advice if you have any doubts. For further information, we suggest referring to the terms and conditions as well as the help and support pages provided by the issuer or advertiser. MetaversePost is committed to accurate, unbiased reporting, but market conditions are subject to change without notice.
About The Author
Damir is the team leader, product manager, and editor at Metaverse Post, covering topics such as AI/ML, AGI, LLMs, Metaverse, and Web3-related fields. His articles attract a massive audience of over a million users every month. He has 10 years of experience in SEO and digital marketing, and has been mentioned in Mashable, Wired, Cointelegraph, The New Yorker, Inside.com, Entrepreneur, BeInCrypto, and other publications. He travels between the UAE, Turkey, Russia, and the CIS as a digital nomad. Damir earned a bachelor's degree in physics, which he believes has given him the critical thinking skills needed to be successful in the ever-changing landscape of the internet.