NVIDIA researchers are defining ways to make faster AI chips in systems with greater bandwidth that are easier to program, said Bill Dally, NVIDIA’s chief scientist, in a keynote released today for a virtual GTC China event.

He described three projects as examples of how the 200-person research team he leads is working to stoke Huang’s Law, the prediction named for NVIDIA CEO Jensen Huang that GPUs will double AI performance every year.

“If we really want to improve computer performance, Huang’s Law is the metric that matters, and I expect it to continue for the foreseeable future,” said Dally, who helped direct research at NVIDIA in AI, ray tracing and fast interconnects.

NVIDIA has more than doubled the performance of GPUs on AI inference every year.

Toward that end, NVIDIA researchers created a tool called MAGNet that generated an AI inference accelerator that hit 100 tera-operations per watt in a simulation. That’s more than an order of magnitude greater efficiency than today’s commercial chips.

MAGNet uses new techniques to orchestrate the flow of information through a device in ways that minimize the data movement that burns most of the energy in today’s chips. The research prototype is implemented as a modular set of tiles so it can scale flexibly.

A separate effort seeks to replace today’s electrical links inside systems with faster optical ones.

“We can see our way to doubling the speed of our NVLink and maybe doubling it again, but eventually electrical signaling runs out of gas,” said Dally, who holds more than 120 patents and chaired the computer science department at Stanford before joining NVIDIA in 2009.

The team is collaborating with researchers at Columbia University on ways to harness techniques telecom providers use in their core networks to merge dozens of signals onto a single optical fiber.

Called dense wavelength division multiplexing, it holds the potential to pack multiple terabits per second into links that fit into a single millimeter of space on the side of a chip, more than 10x the density of today’s interconnects.

Besides faster throughput, the optical links enable denser systems, helping pack dozens of GPUs into a single machine. For example, Dally showed a mockup (below) of a future NVIDIA DGX system with more than 160 GPUs.

In software, NVIDIA’s researchers have prototyped a new programming system called Legate. Legate couples a new form of programming shorthand with accelerated software libraries and an advanced runtime environment called Legion. It lets developers take a program written for a single GPU and run it on a system of any size, even a giant supercomputer like Selene that packs thousands of GPUs. It’s already being put to the test in the U.S.

The three research projects make up just one part of Dally’s keynote, which describes NVIDIA’s domain-specific platforms for a variety of industries such as healthcare, self-driving cars and robotics. It also delves into data science, AI and graphics.

“In a few generations our products will produce amazing images in real time using path tracing with physically based rendering, and we’ll be able to generate whole scenes with AI,” said Dally.

He showed the first public demonstration that combines NVIDIA’s conversational AI framework, Riva, with GauGAN, a tool that uses generative adversarial networks to create beautiful landscapes from simple sketches. The demo lets users instantly generate photorealistic landscapes using simple voice commands.

In an interview between recording sessions for the keynote, Dally expressed particular pride in the team’s pioneering work in several areas. “All our current ray tracing started in NVIDIA Research with prototypes that got our product teams excited,” he said.
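To get a feel for why Huang’s Law matters, here is a quick back-of-envelope sketch of what yearly doubling compounds to. The loop below is purely illustrative arithmetic, not a figure from the keynote.

```python
# Compounding view of Huang's Law: if AI inference performance
# doubles every year, the cumulative speedup after n years is 2**n.
for years in (1, 5, 10):
    speedup = 2 ** years
    print(f"{years} years -> {speedup}x")  # 10 years of doubling ≈ 1000x
```

Ten doublings yield a 1,024x gain, which is why a sustained annual doubling dwarfs the roughly 2x-every-two-years cadence of classic Moore’s Law scaling.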
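The dense wavelength division multiplexing approach described above reaches multi-terabit aggregates by stacking many wavelength channels on one fiber. The channel count and per-channel rate below are illustrative assumptions for a sanity check, not specifications from NVIDIA or Columbia.

```python
# Back-of-envelope DWDM aggregation: total fiber bandwidth is simply
# (number of wavelength channels) x (per-channel signaling rate).
channels_per_fiber = 32   # assumed: distinct wavelengths multiplexed onto one fiber
gbps_per_channel = 100    # assumed: per-wavelength rate in Gbit/s

aggregate_tbps = channels_per_fiber * gbps_per_channel / 1000
print(aggregate_tbps)     # 3.2 Tbit/s on a single fiber
```

Even with these modest assumptions a single fiber carries multiple terabits per second, which is how a millimeter of chip-edge space can beat today’s electrical interconnect density by 10x.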
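The promise of a system like Legate is that code written once against a familiar array API needs no changes to scale out; the runtime repartitions the array operations across however many GPUs are available. As a minimal sketch of that programming style, here is a NumPy stencil kernel of the kind such a layer could distribute. This runs on plain NumPy; the idea that only the import would change under a Legate-style drop-in layer is an assumption for illustration, not a claim about Legate’s exact interface.

```python
import numpy as np  # a Legate-style layer would supply a compatible module here

def jacobi_step(grid):
    """One Jacobi relaxation sweep: each interior cell becomes the
    average of its four neighbors. Written once against the NumPy
    API; a drop-in distributed array layer could partition these
    slice operations across many GPUs without code changes."""
    new = grid.copy()
    new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                              grid[1:-1, :-2] + grid[1:-1, 2:])
    return new

grid = np.zeros((6, 6))
grid[0, :] = 100.0        # hot top edge as a boundary condition
grid = jacobi_step(grid)
print(grid[1, 2])         # interior cell pulled up by the hot boundary: 25.0
```

The kernel contains no explicit communication or device placement, which is the point: the runtime, not the application author, decides how to spread the work from one GPU to thousands.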