As Nvidia strives to democratize AI, here’s everything it announced at GTC 2023.

At this year’s GPU Technology Conference (GTC), Nvidia continued to advance its AI hardware, focusing on making its technology more accessible to businesses across industries and making it easier to develop generative AI applications like ChatGPT.

Below is a daily recap of major announcements made by the Santa Clara, California-based company, with links to in-depth coverage.

Rent AI Supercomputing Infrastructure with DGX Cloud

While Nvidia has been building AI hardware for quite some time, the technology has been slow to go mainstream, in part because of its high cost: back in 2020, the company's DGX A100 server sold for $199,000. To change that, the company today announced DGX Cloud, a service that lets businesses access Nvidia's AI supercomputing infrastructure and software through a web browser. Customers rent DGX server instances, each with eight Nvidia H100 or A100 GPUs and 640GB of GPU memory, at $36,999 per month per node.

Nvidia AI Foundations: cloud services for language, visual models, and biology.

Leveraging the power of DGX Cloud, the company also announced the launch of AI Foundations to help enterprises build and run their own generative AI models. The offering comprises three cloud services, Nvidia says: NeMo for large language models (LLMs), Picasso for image, video, and 3D applications, and BioNeMo for biology and drug discovery research.

New hardware for AI inference and recommendations

Along with DGX Cloud and AI Foundations, Nvidia also introduced four inference platforms designed to help developers quickly build custom generative AI applications: the Nvidia L4 for AI video; the Nvidia L40 for 2D/3D image generation; the Nvidia H100 NVL for deploying large language models; and Nvidia Grace Hopper, which connects the Grace CPU and Hopper GPU over a high-speed, 900GB/s coherent chip-to-chip interface, targeting recommender systems built on giant datasets.

Nvidia's new inference platforms.

The company says the L4 can deliver 120 times more AI-powered video performance than CPUs, along with 99% better energy efficiency, while the L40 serves as the engine of Omniverse, delivering 7x the inference performance for Stable Diffusion and 12x the Omniverse performance of the previous generation.

Chipmakers get cuLitho at Nvidia GTC

At the event, Nvidia CEO Jensen Huang took the stage to announce Nvidia's cuLitho software library for computational lithography. The library, Huang explained, will enable semiconductor manufacturers to design and develop chips with ultra-small transistors and wires, while speeding time to market and improving the energy efficiency of the large data centers that run around the clock to manage the semiconductor manufacturing process.

“The chip industry is the backbone of almost every other industry in the world,” Huang said. “With lithography at the physical limit, NVIDIA introduced cuLitho and collaboration with our partners TSMC, ASML and Synopsys enables fabs to increase productivity, reduce carbon footprint, and lay the foundation for 2nm process technology and beyond.”

Finally, the company also announced partnerships with Medtronic and Microsoft. The former will lead to the development of a common AI platform for software-defined medical devices capable of improving patient care, Nvidia said. The latter will see Microsoft Azure host Nvidia Omniverse and Nvidia DGX Cloud.

The 2023 Nvidia GTC event will run until March 23rd.
