Data center becomes Nvidia’s biggest business
Something we’ve been waiting a decade and a half for has just happened: the data center is now Nvidia’s biggest business. Bigger even than the gaming business the company was founded on nearly three decades ago.
The rise of the data center business was no accident – it is the result of very deliberate engineering and investment by Nvidia – and it has been a remarkable thing to watch. The Next Platform was created in large part to chronicle the rise of the new kinds of platforms Nvidia has built since the emergence of the first Tesla GPU compute engines and the CUDA development environment for them.
And that – the establishment of Nvidia as an accelerator for HPC simulation and modeling and the evolution of AI training and inference, from identifying images of cats on the Internet to all sorts of data manipulation in its many forms, up to the creation of new ideas that are not possible with conventional programming – is perhaps only the beginning. Nvidia, like many others, aims to create immersive worlds – intentionally plural – of the metaverse superimposed on the physical reality we all inhabit.
There will be plenty of gnashing of teeth over the next week – until the next crisis hits Wall Street, anyway – about Nvidia predicting a weaker second quarter for its fiscal 2023, but none of that has much long-term importance. The weakness comes as no surprise, given the lockdowns in China and the war in Ukraine, and many IT vendors are feeling the pain. Case in point: Cisco Systems’ latest financial results, which we discussed recently.
The fact remains that Nvidia has a very strong gaming business and a very strong data center business, and it is entering the world of general-purpose computing with its “Grace” Arm server chips, which will only further expand its total addressable market. Through its acquisition of Mellanox, it has a range of interconnects and DPUs to match its existing GPU compute engines and impending CPU compute engines, and of course it sells systems and clusters as well as the parts that OEMs and ODMs need to build their own.
In the quarter ending May 1, Nvidia’s overall revenue rose 46.4% to $8.29 billion, but net profit fell 15.4% to $1.62 billion, largely because of a $1.35 billion charge that Nvidia had to pay Arm Holdings for its failed attempt to acquire it. It may be a small price to pay for the tighter focus that Nvidia will now enjoy. The good news is that Nvidia has $20.34 billion in the bank and a total addressable market of around $450 billion, as it highlighted earlier this week in its Computex conference presentations in Taiwan.
In the first quarter of the fiscal year, Nvidia’s data center division recorded sales of $3.75 billion, up 83.1%, while the gaming division only grew by 31.2% to $3.62 billion. It’s hard to say whether data center will remain Nvidia’s dominant business going forward, or whether the two divisions will vie for position. Much depends on the nature and timing of the competition that Nvidia increasingly faces in these two markets, and how Nvidia fares as it builds a broader and deeper data center portfolio, including processors.
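Those reported growth rates also let you back out the year-ago figures and see just how recently the crossover happened. A quick sketch, using only the numbers above (in billions of dollars):

```python
# Back out year-ago quarterly revenue from the reported figures and
# year-over-year growth rates (all numbers from the article, in $B).
segments = {
    "Data center": (3.75, 0.831),  # revenue this quarter, YoY growth
    "Gaming":      (3.62, 0.312),
}

for name, (rev_now, growth) in segments.items():
    rev_prior = rev_now / (1 + growth)
    print(f"{name}: ${rev_now:.2f}B now, ~${rev_prior:.2f}B a year ago")
```

A year ago, gaming was comfortably ahead – roughly $2.76 billion against roughly $2.05 billion for data center – so the crossover is entirely a product of data center’s much faster growth, not any weakness in gaming.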
“Revenues from hyperscale and cloud computing customers more than doubled year-over-year, driven by strong demand for external and internal workloads,” said Colette Kress, Nvidia’s chief financial officer, during a call with Wall Street analysts. “Customers remain supply constrained in their infrastructure needs and continue to add capacity as they try to keep pace with demand.”
Our model suggests that of the data center revenue in the quarter, $2.14 billion came from hyperscalers and cloud builders, up 105%, while revenue from other customers – universities, governments, enterprises, and other service providers – rose 60% to $1.61 billion.
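As a sanity check on our model, the two estimated pieces should sum to the $3.75 billion Nvidia reported for the division. A minimal check, where both component figures are our estimates rather than Nvidia disclosures:

```python
# Sanity-check our model's split of Q1 fiscal 2023 data center revenue.
# Both component figures are our estimates, not Nvidia's disclosures.
hyperscale_cloud = 2.14   # $B, up 105% year-over-year (our estimate)
other_customers  = 1.61   # $B, up 60% year-over-year (our estimate)
reported_total   = 3.75   # $B, from Nvidia's report

assert abs((hyperscale_cloud + other_customers) - reported_total) < 0.005
print("Model split matches the reported division total")
```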
We used to have a way to see how much revenue the Mellanox business was contributing, but it is now very difficult to estimate accurately because InfiniBand and Spectrum Ethernet networking are built into Nvidia’s systems and clusters. We have no doubt that the ConnectX network interface business remains strong, and Kress noted that sales of 25 Gb/s, 50 Gb/s, and 100 Gb/s adapters were strong and accelerating. “Our networking products are still limited in supply, although we anticipate continued improvement throughout the year,” Kress added.
We have no doubt that the networking unit is bigger than it was when Nvidia completed the Mellanox acquisition two years ago, but we can’t say by how much. It could be something like 15% of total revenue and about a third of data center revenue, but we don’t have much confidence in that estimate except in the broadest sense, such as over a trailing twelve months. The HPC and AI businesses are inherently choppy, as is selling to hyperscalers and cloud builders.
What we can say is that its Compute & Networking group had sales of $3.67 billion, up 66.2% in the quarter, while its Graphics group grew by “only” 33.8% to $4.62 billion.
Despite Nvidia forecasting only $8.1 billion in sales for the second quarter of fiscal 2023, co-founder and chief executive officer Jensen Huang remained optimistic.
“We had record data center activity in the last quarter,” Huang said on the call. “We expect to have another record-breaking quarter this quarter, and we are quite excited for the second half. Artificial intelligence and data-driven machine learning techniques for writing software and extracting information from the vast amount of data that companies have is incredibly strategic for any company we know of. Because ultimately AI is about automating intelligence and most companies are about domain specific intelligence. We want to produce intelligence. And there are now several techniques that have been created that allow most businesses to apply their data to extract insights and automate a lot of the predictive things that they need to do and do it quickly.”
Huang added that the networking business is “highly supply-constrained” and that demand is “really, really high.” Supply of networking products, which rely on components from other vendors and not just Nvidia’s own chips, is expected to improve each quarter through the remainder of the fiscal year. The “Hopper” GH100 GPU and its H100 accelerator, which come in PCI-Express 5.0 and SXM5 form factors, are expected to be available during the third fiscal quarter and to ramp toward the end of the fiscal year – that is, December 2022 and January 2023. In the meantime, the A100 remains the workhorse of data center GPU computing, and companies are buying as many of them as Nvidia can make.
And now we’ll be watching to see when, and if, the Compute & Networking group can grow bigger than the Graphics group. So far it has not happened, but given the relative growth rates, it may not be far off.
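One way to frame the question: if both groups somehow held their latest year-over-year growth rates (they won’t – these businesses are choppy, as noted above), when would the lines cross? A purely illustrative sketch:

```python
import math

# Back-of-the-envelope crossover projection, assuming (unrealistically) that
# each group holds its latest year-over-year growth rate indefinitely.
# Figures in $B and growth multipliers are taken from the quarter's results.
cn_rev, gfx_rev = 3.67, 4.62        # Compute & Networking vs. Graphics, latest quarter
cn_mult, gfx_mult = 1.662, 1.338    # implied annual growth multipliers

# Solve cn_rev * cn_mult**t == gfx_rev * gfx_mult**t for t (years):
t = math.log(gfx_rev / cn_rev) / math.log(cn_mult / gfx_mult)
print(f"Crossover in roughly {t:.1f} years at constant growth rates")
```

Constant growth rates are a strong assumption, so treat the output as a thought experiment about relative momentum rather than a forecast.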