High-performance graphics cards, typically associated with gaming, have become popular over the last few years in an area many might not expect: artificial intelligence.
Many experts attribute the recent acceleration of success in AI to the wider availability and use of graphics processing units (GPUs): their many cores are designed to run tasks in parallel, which lets them efficiently handle the vector and matrix operations prevalent in deep learning.
Training and deploying state-of-the-art deep neural networks is very computationally intensive, and while modern GPUs offer high-density computation, researchers need more than a fast processor – they also need optimized libraries and efficient programming tools so that they can experiment with new ideas.
The VP of Applied Deep Learning Research at NVIDIA joined us at the 2017 Deep Learning Summit in San Francisco to share his expertise on GPUs and platforms for deep learning, as well as insights into the latest deep learning developments at NVIDIA. I asked him some questions at the summit to learn more about his work.
What motivated you to begin your work in deep learning?
As a graduate student, I was very interested in applications of parallel computing, and decided that machine learning would be one of the most impactful. At the time, neural networks were out of favour, and so I focused my research on other methods, but when deep learning started getting startlingly good results, it grabbed everyone’s attention. Around then, I was introduced to Andrew Ng, who was working at Stanford and Google back then, and we started collaborating to make better systems for deep learning. Since then, I’ve been all in on deep learning.
What are you currently working on at NVIDIA? What most excites you about this work?
I lead a team of researchers focused on building prototypes of new applications powered by deep learning. NVIDIA’s software and hardware have already made a big impact on deep learning, so it makes sense that as the next step, it would start applying deep learning to its own problems. There are a lot of relatively unexplored fields that are important to NVIDIA, so I’m eager to try a bunch of new projects out.
Which industries do you think will be most affected by neural networks?
Andrew Ng likes to say that “AI is the new electricity” – I like this slogan because it conveys my sense that every industry is going to be affected by AI, just as every industry was affected by electricity. Modern AI applications really gained a foothold at internet companies like Google, Baidu, and Facebook because they have hundreds of millions of users generating incredible amounts of data through their behaviour online, which enabled bootstrapping AI from research into real products. But we’re at the very beginning of AI applications. Health care, manufacturing, transportation, agriculture, logistics, construction, entertainment, the law and government – all these fields will gain significant new capabilities thanks to AI.
Which companies do you feel are making good strides in the area of deep learning?
Obviously, internet companies like Google, Baidu, Facebook, Twitter, Alibaba, Amazon, and Microsoft are doing great work in deep learning. A few other companies stand out, like Apple, IBM and Uber – and of course NVIDIA. There are a ton of startups applying deep learning to many new problems, and I’m excited to see their applications proliferate.
Tell us more about Pascal GP100.
GP100 is the name of the biggest GPU NVIDIA currently makes. It has 60 Streaming Multiprocessors, each of which has 64 math units, so it can do 3840 math operations every clock cycle. It’s massively parallel – it can process up to 120,000 threads of independent work concurrently. In its fastest currently shipping configuration, the Tesla P100, it can do 10.6 trillion single-precision floating-point operations per second, and has 732 GB/s of memory bandwidth to its off-chip stacked memory.
It also has a new interconnect, called NVLink, which provides 160 GB/s of bandwidth to other GPUs, which helps in scaling neural network training. The Pascal family of GPUs is quite scalable – although GP100 is a big chip, there are a variety of smaller GPUs that hit lower power and performance targets. The family scales from under 10 W to 300 W, and has good software compatibility from the small GPUs to the large ones.
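The figures quoted above fit together arithmetically. As a rough sanity check, here is a short Python sketch that recomputes the per-clock operation count from the SM and math-unit numbers, and derives the clock rate implied by the quoted peak throughput, assuming each math unit issues one fused multiply-add (FMA, counted as 2 FLOPs) per cycle – an assumption on my part, not a figure from the interview:

```python
# Back-of-envelope check of the GP100 numbers quoted in the interview.
# Assumption (not stated above): each math unit can retire one FMA
# per clock, and an FMA counts as 2 floating-point operations.

sms = 60                     # Streaming Multiprocessors
units_per_sm = 64            # math units per SM
ops_per_clock = sms * units_per_sm
print(ops_per_clock)         # 3840, matching the interview

# The interview quotes 10.6 trillion single-precision FLOPS for the
# Tesla P100 configuration; under the FMA assumption this implies a
# clock rate of roughly:
peak_flops = 10.6e12
implied_clock_ghz = peak_flops / (ops_per_clock * 2) / 1e9
print(f"implied clock ~{implied_clock_ghz:.2f} GHz")
```

The implied clock comes out to roughly 1.4 GHz, which is in the range one would expect for a large GPU of that generation; the point of the sketch is only that the quoted core count and peak throughput are mutually consistent.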
See the full events list here for summits and dinners focused on AI, Deep Learning and Machine Intelligence taking place in San Francisco, London, Amsterdam, Boston, New York, Singapore, Hong Kong, and Montreal!
The next Deep Learning Summit will take place alongside the Deep Learning in Finance Summit in Singapore on 27-28 April. Meet with and learn from leading experts in speech and image recognition, neural networks and big data. Register to attend here.
Confirmed speakers include Jeffrey de Fauw, Research Engineer at DeepMind; Vikramank Singh, Software Engineer at Facebook; Nicolas Papernot, Google PhD Fellow at Penn State University; Brian Cheung, Researcher at Google Brain; Somnath Mukherjee, Senior Computer Vision Engineer at Continental; and Ilija Ilievski, PhD Student at NUS. View more speakers and topics here.