When Nvidia (NVDA) reports its second quarter earnings on August 27, investors will be laser-focused on the company's data center results. After all, that's where the chip giant recognizes the revenue from sales of its high-powered AI processors.
But the data center segment includes more than just chip sales. It also encompasses one of Nvidia's most important, though often overlooked, offerings: its networking technologies.
Comprising its NVLink, InfiniBand, and Ethernet solutions, Nvidia's networking products are what allow its chips to communicate with each other, let servers talk to one another inside massive data centers, and ultimately ensure that end users can connect to all of it to run AI applications.
“The most important part of building a supercomputer is the infrastructure. The most important part is how you connect those computing engines together to form that larger unit of computing,” explained Gilad Shainer, Nvidia's senior vice president of networking.
Nvidia CEO Jensen Huang attends the 9th edition of the VivaTech trade show at the Parc des Expositions de la Porte de Versailles on June 11, 2025, in Paris. (Chesnot via Getty Images)
And it's translating into big sales. Nvidia's networking business accounted for $12.9 billion of the company's $115.1 billion in data center revenue in its prior fiscal year. That might not seem impressive when you consider that chip sales brought in $102.1 billion, but it eclipses the $11.3 billion that Nvidia's second-largest segment, gaming, brought in for the year.
In Q1, networking made up $4.9 billion of Nvidia's data center revenue. And it's set to keep growing as customers continue to build out their AI capacity, whether at research universities or in massive data centers.
“It's the most underappreciated part of Nvidia's business,” Deepwater Asset Management managing partner Gene Munster told Yahoo Finance. “Generally, networking doesn't get attention, because it's 11% of revenue. But it's growing like a rocket ship.”
When it comes to the AI explosion, Kevin Deierling, Nvidia's senior vice president of networking, says the company works across three different types of networks. The first is its NVLink technology, which connects GPUs to each other within a server, or across multiple servers in a tall, cabinet-like server rack, allowing them to communicate with one another and boost overall performance.
Then there's InfiniBand, which connects multiple server nodes across data centers to form what is essentially a massive AI computer. Finally, there's the front-end storage and management network, which uses Ethernet connectivity.
Nvidia CEO Jensen Huang presents the Grace Blackwell NVLink72 as he delivers a keynote address at the Consumer Electronics Show (CES) in Las Vegas, Nevada, on January 6, 2025. (Patrick T. Fallon/AFP via Getty Images)
“All three of those networks are needed to build a giant AI supercomputer, or even a modest-sized enterprise AI computer,” Deierling explained.
But the purpose of all those interconnects isn't just to help chips and servers communicate. It's also to let them do so as quickly as possible. If you're trying to run a collection of servers as a single computing unit, they need to talk to each other in the blink of an eye.
If data doesn't reach the GPUs fast enough, it slows down the entire operation, holding up other processes and dragging down the overall efficiency of the whole data center.
“[Nvidia is a] very different business without networking,” Munster explained. “The results that are driving people to buy all of these Nvidia chips wouldn't have happened if it weren't for their networking.”
And as companies continue to develop larger AI models, as well as autonomous and semi-autonomous agentic AI capabilities that can perform tasks for users, making sure those GPUs work in concert is becoming even more important.
That's especially true as inferencing, the process of actually running AI models, requires powerful data center systems of its own.
The AI industry is in the midst of a broad reorientation around the idea of inferencing. At the start of the AI explosion, the thinking was that training AI models would require extremely powerful AI computers, while actually running them would take considerably less horsepower.
That thinking sent Wall Street into a panic earlier this year, when DeepSeek claimed to have trained its AI models without Nvidia's best chips. The idea at the time was that if companies could train and run their AI models on less capable chips, there would be no need for Nvidia's expensive high-powered systems.
But that narrative quickly flipped, as chip companies explained that those same AI models benefit from running on powerful AI computers, allowing them to reason over more information faster than they could on less advanced systems.
“I think there's still a misconception out there that inference is trivial and easy,” Deierling said.
“It turns out that it's starting to look more and more like training as we get to [an] agentic workflow. So all of these networks are important. Having them work together, tightly coupled with the CPU, GPU, and DPU [data processing unit], all of that is vitally important to make inference a good experience.”
Still, Nvidia's rivals are circling. AMD is looking to take more market share from the company, and cloud giants like Amazon, Google, and Microsoft continue to develop their own AI chips.
Industry groups also have their own competing networking technologies, including UALink, which is meant to go up against NVLink, explained Forrester analyst Alvin Nguyen.
But for now, Nvidia continues to lead the pack. And as tech giants, researchers, and enterprises keep clamoring for Nvidia's chips, the company's networking business is all but guaranteed to keep growing.
Email Daniel Howley at dhowley@yahoofinance.com. Follow him on X/Twitter at @DanielHowley.