Nvidia Delivers World’s First AI Supercomputer To OpenAI
What do you need when you are the world’s leading non-profit AI research team? Obviously, the world’s fastest AI computer, and that is where Nvidia comes in: the company has gifted one to OpenAI, an artificial intelligence research lab based in San Francisco.
Nvidia’s CEO Jen-Hsun Huang hand-delivered the world’s first AI supercomputer in a box to OpenAI. The machine, named the Nvidia DGX-1, packs a massive 170 teraflops of computing power, roughly the equivalent of 250 conventional servers.
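As a back-of-the-envelope check on that comparison, the quoted figures imply each conventional server delivers well under one teraflop. A minimal sketch of the arithmetic (the 170 TFLOPS and 250-server numbers are from the article; the per-server figure is simply derived from them):

```python
# Rough check of the DGX-1 comparison quoted in the article.
dgx1_tflops = 170           # computing power quoted for the DGX-1
equivalent_servers = 250    # Nvidia's rough comparison point

per_server_tflops = dgx1_tflops / equivalent_servers
print(f"{per_server_tflops:.2f} TFLOPS per conventional server")  # → 0.68
```

In other words, the comparison assumes a typical server of the era manages on the order of two-thirds of a teraflop, which is why packing 170 teraflops into a single box was notable.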
According to Huang:
I thought it was incredibly appropriate that the world’s first supercomputer dedicated to artificial intelligence would go to the laboratory that was dedicated to open artificial intelligence.
The DGX-1 took 3,000 people three years to build, at a cost of around $2 billion. Given OpenAI’s research scope, Nvidia considers it money well spent: the researchers will be applying it to some of the toughest problems in AI today.
OpenAI was founded only last year, and in that short time it has established itself as a true pioneer in the field, with the potential to create some of the most beneficial technologies humans have ever built. With the Nvidia DGX-1, its researchers gain a huge advantage: they can now explore problems that were previously out of reach, at performance levels no one could have imagined before.
Researchers today are limited by the computational power of their systems; their progress depends on how fast their GPUs are. Speed is essential for deep learning, and with the Nvidia DGX-1, OpenAI just got a huge boost to its capabilities.
Right now, if we’re training on, say, a month of conversations on Reddit, we can, instead, train on entire years of conversations of people talking to each other on all of Reddit. And then we can get much more data in terms of how people interact with each other. And, eventually, we’ll use that to talk to computers, just like we talk to each other.