
Nvidia’s keynote at GTC held some surprises


SAN JOSE — “I hope you realize this is not a concert,” Nvidia CEO Jensen Huang told the enormous audience filling the SAP Center in San Jose. That’s how he introduced what may be the polar opposite of a concert: the company’s GTC event. “You have arrived at a developer conference. There will be a lot of science described: algorithms, computer architecture, mathematics. I sense a very heavy weight in the room; all of a sudden, you’re in the wrong place.”

Maybe it wasn’t a rock concert, but the leather-jacketed 61-year-old CEO of the world’s third most valuable company by market capitalization certainly had his share of fans in the audience. The company was founded in 1993 with the goal of pushing general-purpose computing beyond its limits. “Accelerated computing” became Nvidia’s rallying cry: wouldn’t it be better to build specialized chips and boards rather than general-purpose ones? Nvidia’s chips give graphics-hungry gamers the tools they need to play games at higher resolutions, higher quality, and higher frame rates.

It’s probably not a huge surprise that Nvidia’s CEO drew parallels to a concert. The venue was, in a word, very concert-y. Image credits: TechCrunch/Haje Kamps

Monday’s keynote was, in a way, a return to the company’s original mission. “I want to show you the spirit of NVIDIA, the spirit of our company, at the intersection of computer graphics, physics and artificial intelligence, where they all intersect inside the computer.”

Then, over the next two hours, Huang did something rare: he nerded out. Hard. Anyone who came to the keynote expecting him to pull a Tim Cook, with a slick, audience-focused presentation, was bound to be disappointed. Overall, the keynote was tech-heavy, acronym-filled, and unapologetically a developer conference.

We need bigger GPUs

Graphics processing units (GPUs) are where Nvidia started. If you’ve ever built a computer, you’ve probably handled a graphics card that slots into a PCIe slot. That’s where the journey began, but we’ve come a long way since then.

The company announced the all-new Blackwell platform, which is an absolute beast. The processor is “pushing the limits of physics on how big a chip can be,” Huang said. It combines the power of two dies in a single GPU, linked by a chip-to-chip interconnect running at 10 TB/s.

“I have about $10 billion worth of equipment here,” Huang said, holding up a Blackwell prototype. “The next step is going to cost $5 billion. Fortunately for all of you, it gets cheaper from there. Putting a bunch of these chips together could generate some really impressive power.”

The previous generation of Nvidia’s AI-focused GPUs was called Hopper. Blackwell is between 2 and 30 times faster, depending on how you measure it. Huang explained that training the GPT-MoE-1.8T model took 8,000 GPUs, 15 megawatts, and 90 days. With the new system, the same job would need only 2,000 GPUs drawing roughly a quarter of the power.
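For a sense of scale, the figures Huang quoted can be turned into a quick back-of-envelope calculation. This is a sketch only: it assumes the run length for the new system matches the 90 days quoted for the old one, and it simply restates Nvidia's own numbers rather than any independent measurement.

```python
HOURS_PER_DAY = 24

def run_energy_mwh(power_mw: float, days: float) -> float:
    """Total energy of a sustained run, in megawatt-hours."""
    return power_mw * days * HOURS_PER_DAY

# Figures Huang quoted for training the 1.8T-parameter model on Hopper:
hopper_gpus, hopper_mw, run_days = 8_000, 15.0, 90
blackwell_gpus = 2_000  # quoted GPU count for the same job on Blackwell

gpu_days_hopper = hopper_gpus * run_days        # 720,000 GPU-days
gpu_days_blackwell = blackwell_gpus * run_days  # 180,000 GPU-days

print(f"GPU-count reduction: {gpu_days_hopper / gpu_days_blackwell:.0f}x")
print(f"Hopper run energy: {run_energy_mwh(hopper_mw, run_days):,.0f} MWh")
```

At face value, the quoted Hopper run consumed about 32,400 MWh of electricity, which is why the GPU-count and power reductions were the headline numbers of the announcement.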

These GPUs are pushing an impressive amount of data — which is a very good segue into another topic Huang talked about.

What comes next

Nvidia has rolled out a new set of tools for automakers working on self-driving cars. The company was already a major player in robotics, but it has doubled down on its efforts by providing roboticists with new tools to make their robots smarter.

The company also introduced Nvidia NIM, a software platform aimed at simplifying the deployment of AI models. NIM leverages Nvidia hardware as a foundation and aims to accelerate enterprise AI initiatives by providing an ecosystem of AI-ready containers. It supports models from various sources, including Nvidia, Google, and Hugging Face, and integrates with platforms like Amazon SageMaker and Microsoft Azure AI. NIM will expand its capabilities over time, including tools for AI-generated chatbots.

“Anything you can digitize: as long as there is some structure in which we can apply some patterns, that means we can learn the patterns,” Huang said. “And if we can learn patterns, we can understand meaning. When we understand meaning, we can generate it too. And here we are in the generative AI revolution.”
