Note: some programs use green bars if the closing price is greater than the opening price, even if that day's close is lower than the prior day's close. For consistency in how I formulated rules for pocket pivots, a close below the prior day's close counts as a red bar. On 6-28, NVDA closed lower than the prior day but above its own opening price, so depending on how your color bars are set, the bar could appear either red or green.
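The two coloring conventions above can be sketched in a few lines of Python. This is purely illustrative; the prices in the example are hypothetical, not actual NVDA quotes.

```python
def pocket_pivot_color(close, prior_close):
    """The convention used for my pocket-pivot rules: compare the
    close to the PRIOR day's close."""
    return "green" if close > prior_close else "red"

def conventional_color(open_price, close):
    """The convention some charting programs use: compare the close
    to the same day's open."""
    return "green" if close > open_price else "red"

# A day like NVDA's 6-28: closed below the prior day's close but above
# its own open, so the two conventions disagree (illustrative prices).
o, c, prior = 100.0, 101.0, 102.0
print(pocket_pivot_color(c, prior))   # red
print(conventional_color(o, c))       # green
```

The disagreement only arises on days where the close lands between the open and the prior close; on all other days the two conventions paint the same color.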
ChatGPT, one of the most popular generative AI apps, was trained with the help of 10,000 NVDA GPUs. Generative AI applications are expected to create demand for hundreds of thousands of GPUs, generating a massive $451 billion in revenue by 2030. Rowan Cheung provides daily updates on the dozens of new apps and AI platforms launched every 24 hours; the pace of development is beyond breakneck. OpenAI uses NVDA's A100 GPUs to power ChatGPT and will now deploy the latest-generation H100 Hopper GPUs to train Microsoft's Azure supercomputer for AI research. Meta Platforms has also deployed H100 GPUs in its AI supercomputer, known as Grand Teton, to power both training and inference of deep-learning models. Meanwhile, other generative AI platforms, such as Stability AI, Twelve Labs, and Anlatan, are tapping H100 GPUs to train different kinds of applications.
AI compute during the last ten years has doubled every 6 months or so, significantly outpacing Moore's Law. NVDA is the leader best positioned to address AI-driven hardware demand: it has a deep moat in design patent protection and superior technology compared with INTC and AMD, both of which have heat issues. NVDA also has a first-mover advantage, so both INTC and AMD lag behind.
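The gap between those two growth rates compounds dramatically. A quick back-of-the-envelope comparison, assuming Moore's Law doubles transistor counts roughly every 2 years:

```python
years = 10

# AI compute: doubling roughly every 6 months, per the figure above.
ai_doublings = years / 0.5           # 20 doublings
ai_growth = 2 ** ai_doublings        # ~1,048,576x

# Moore's Law: doubling roughly every 2 years (assumed cadence).
moore_doublings = years / 2          # 5 doublings
moore_growth = 2 ** moore_doublings  # 32x

print(f"AI compute over {years} years: ~{ai_growth:,.0f}x")
print(f"Moore's Law over {years} years: ~{moore_growth:,.0f}x")
```

In other words, a 6-month doubling cadence delivers on the order of a million-fold increase in a decade versus roughly 32-fold for the classic Moore's Law pace.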
At present, there is a severe CPU and GPU hardware bottleneck due to extreme demand. A large language model (LLM), a type of machine learning model that can perform a variety of natural language processing (NLP) tasks, is the beating heart of generative AI such as ChatGPT. To address the predicted hardware bottleneck caused by LLMs, NVDA's GH200 platform has a whopping 144TB of "unified" memory. Within machine learning, we have differentiable computing, a powerful way to tune computational models so they learn from data more efficiently and effectively and even make predictions. This will revolutionize many areas, such as supply chains, and will create many new jobs in software engineering, applied math, statistics, and data science. Because such software learns from experience, it improves in accuracy and efficiency, overcoming the limits of more rigid, rule-based software. Inevitably, this will exert even higher demand on hardware, as it requires significant processing power. This technology is likely to overtake many traditional software applications in the coming years, particularly in areas such as image recognition, natural language processing, value chains, and autonomous systems. The increased demand for hardware benefits companies such as NVDA.
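The core idea of differentiable computing mentioned above can be sketched in a few lines: instead of hand-coding a rule, we define a model with tunable parameters and a loss, then follow the gradient of the loss to fit the parameters from data. This toy sketch uses a numerical gradient in plain Python as a stand-in for the automatic differentiation that real frameworks provide; everything here (the model, the data, the learning rate) is hypothetical.

```python
def loss(w, data):
    """Mean squared error of a one-parameter model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def grad(f, w, data, eps=1e-6):
    """Numerical gradient via central differences -- a simple stand-in
    for the autodiff machinery of real ML frameworks."""
    return (f(w + eps, data) - f(w - eps, data)) / (2 * eps)

# Toy data generated by y = 3x; the "rule" w = 3 is never hand-coded.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]

w = 0.0  # start with no knowledge of the relationship
for _ in range(200):
    w -= 0.05 * grad(loss, w, data)  # one gradient-descent step

print(round(w, 3))  # converges to ~3.0, learned purely from the data
```

The same loop, scaled up to billions of parameters, is what LLM training amounts to, and it is exactly this gradient computation, repeated across enormous datasets, that drives the demand for GPU processing power discussed above.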
One caveat: Nvidia warned that if the United States imposes new restrictions on the export of AI chips to China, it would result in a “permanent loss of opportunities” for US industry. The company’s chief financial officer, Colette Kress, said she didn’t anticipate any “immediate material impact” but tighter curbs would impact earnings in the future. “Over the long-term, restrictions prohibiting the sale of our datacenter GPUs to China, if implemented, would result in a permanent loss of opportunities for US industry to compete and lead in one of the world’s largest markets.”