Is this the beginning of the end for Nvidia’s artificial intelligence (AI) chip dominance?
Nvidia’s customers are stepping up their in-house AI chip development efforts, but investors need to look at the bigger picture.
Nvidia (NVDA -10.01%) has been a huge beneficiary of the surge in demand for artificial intelligence (AI) applications that began in late 2022. Cloud computing giants have been lining up to secure the company’s data center graphics processing units (GPUs), which are used to train and power large language models (LLMs).
For the likes of Meta Platforms (META -4.13%), Microsoft, Amazon, and many others, Nvidia has been the go-to provider of AI chips. It’s worth noting that these tech giants have been willing to wait as long as a year from order to delivery, and to pay top dollar, to procure Nvidia’s chips. Other chip manufacturers such as Intel and Advanced Micro Devices (AMD -5.44%) have been left far behind; by some estimates, Nvidia controls as much as 95% of the AI chip market.
As a result, Nvidia’s revenue and profits have multiplied quickly. But some customers are making a concerted effort to reduce their reliance on its chips.
Building their own AI chips
Nvidia’s success in the AI GPU market can be traced to its A100 processor, launched in 2020. The graphics specialist built this GPU for high-performance computing applications, manufacturing it on a 7-nanometer (nm) process node. OpenAI reportedly deployed thousands of A100 chips to train ChatGPT.
Interestingly, rival AMD began offering the MI250X, a competing data center accelerator built on a 6nm process node, in late 2021. However, third-party estimates suggest the A100 still outperforms AMD’s chip in LLM training tasks.
Then, in 2022, Nvidia upped its game with the H100 processor, built on a custom 5nm process. The company packed 80 billion transistors onto the chip, compared to 54 billion in the A100, and the H100 turned out to be far more powerful than its predecessor. AMD, meanwhile, took until the end of 2023 to release its next competing chip, the MI300.
This explains why the H100 saw such huge demand last year, helping Nvidia’s data center revenue reach $47.5 billion in fiscal 2024, up from $15 billion the year before. Meta alone poured billions of dollars into Nvidia’s coffers for H100s, and it wasn’t the only large buyer to do so.
But the lack of a compelling alternative to the H100, along with its high price and limited availability, explains why some of Nvidia’s top customers have begun in-house AI chip development efforts to reduce their dependence on the chipmaker. For example, Meta Platforms recently announced the second generation of its own AI chip, built on a 5nm process node.
According to Meta, the new chip “more than doubles the compute and memory bandwidth of the previous solution while maintaining tight alignment with workloads. It is designed to efficiently serve ranking and recommendation models that deliver high-quality recommendations to users.”
Additionally, Meta plans to continue its own chip development program to reduce the operating and development costs of its AI servers.
Something similar is happening at Microsoft. The tech giant unveiled two custom AI chips in late 2023, one of which is a 5nm AI accelerator called Maia 100. The chip packs 105 billion transistors and is built for LLM training and inference, running AI workloads in the cloud.
Amazon has also gone down the path of developing its own AI chips. Last November, it unveiled its latest effort, Trainium2, which it claims is four times more powerful than its predecessor. Amazon Web Services customers can use these chips to train AI models. Meanwhile, Alphabet is jumping on this trend with its newly unveiled custom Axion processor.
Considering that Meta, Microsoft, Google, and Amazon were among the biggest buyers of H100 processors last year, their focus on developing their own chips undoubtedly poses a threat to the semiconductor giant’s bottom line.
But investors should focus on the bigger picture.
While it’s true that Nvidia’s customers are looking to reduce their dependence on the chipmaker, they are still expected to keep buying its powerful GPUs. For example, when Nvidia announced its next-generation Blackwell AI GPUs last month, all of the companies mentioned above said they would deploy the new chips once they become available.
That’s not surprising. Nvidia’s future GPUs are expected to be much more powerful, allowing customers to train larger LLMs. The chipmaker claims that Blackwell GPUs can run LLMs “at up to 25 times less cost and energy consumption than predecessors.” Assuming these new GPUs are priced competitively relative to the H100, Nvidia’s customers should see a higher return on their AI hardware investments with Blackwell processors.
As a result, demand for Nvidia’s AI chips could remain robust. Another reason Nvidia can stay the dominant player in the AI chip market is its grip on the supply chain. Although Nvidia’s customers and competitors are turning to foundry giant TSMC to manufacture their own AI chips, Nvidia reportedly consumes 60% of TSMC’s advanced chip packaging capacity.
Of course, TSMC is looking to increase capacity to meet demand from Nvidia and other customers, but the GPU specialist will likely claim the lion’s share of the foundry’s additional output, given its already massive lead in the AI chip market.
Therefore, even as other tech giants press ahead with their chip development efforts, Nvidia will likely remain the top AI chip player for quite some time. Japanese investment bank Mizuho estimates that Nvidia could sell $280 billion worth of AI chips by 2027, with the overall market expected to reach $400 billion. In other words, Mizuho expects Nvidia’s AI chip market share to decline over the next three years even as its data center revenue increases significantly.
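As a quick back-of-the-envelope check (an inference from the numbers above, not a figure Mizuho itself discloses), those estimates imply a market share of roughly

\[
\text{implied share} \approx \frac{\$280\ \text{billion}}{\$400\ \text{billion}} = 70\%.
\]

That would be a meaningful step down from the roughly 95% share cited earlier, yet 70% of a $400 billion market still dwarfs the $47.5 billion in data center revenue Nvidia reported for fiscal 2024, which is why falling share and strong revenue growth can coexist.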
Nvidia’s data center revenue, then, will likely keep growing at a healthy pace thanks to the long-term growth opportunity in AI chips, even if the company cedes some market share. That’s why investors shouldn’t worry too much about the chip development efforts of Nvidia’s customers. Instead, given the impressive catalysts Nvidia is sitting on, they should view the stock’s recent decline as an opportunity to buy more shares.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool’s board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool’s board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool’s board of directors. Harsh Chauhan has no position in any of the stocks mentioned. The Motley Fool has positions in and recommends Advanced Micro Devices, Alphabet, Amazon, Meta Platforms, Microsoft, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Intel and recommends the following options: long January 2025 $45 calls on Intel, long January 2026 $395 calls on Microsoft, short January 2026 $405 calls on Microsoft, and short May 2024 $47 calls on Intel. The Motley Fool has a disclosure policy.