Deep Learning at Scale: Q&A with Naveen Rao, Nervana Systems

Deep learning has had a major impact in the last three years. Once-unreliable interactions with machines, such as speech recognition, natural language understanding, and image processing, have been made robust by deep learning, which also holds promise for finding usable structure in large data sets.

Despite this progress, training is lengthy and has proven difficult to scale due to the constraints of existing compute architectures, and standardized tools for building and scaling deep learning solutions are still lacking. At the Deep Learning Summit in Boston, Naveen Rao, CEO & Co-Founder of Nervana Systems, will outline some of these challenges and how fundamental changes to the organization of computation and communication can lead to large advances in capabilities.

I caught up with Naveen ahead of the summit on 12-13 May to hear more about how deep learning is solving problems in different industries and what we can expect over the next 5 years.

How did you start your work in deep learning?
Amir Khosrowshahi, Arjun Bansal and I were investigating alternative, brain-inspired (neuromorphic) computing architectures at Qualcomm Research. All three of us have PhDs in neuroscience, but we also have backgrounds in computer science, physics, and computer engineering. With that multidisciplinary view of the state of the art in computing, it became clear that deep learning was the future, but that existing computers had major shortcomings when training deep neural networks. Bringing together engineering disciplines and bio-inspiration, we set out to build machines that can effectively solve the biggest compute problem of our time: finding useful inferences in data.

What is Nervana doing to make deep learning more accessible?
Deep learning is quickly becoming the state-of-the-art technique for solving complex problems in image classification, speech analysis and natural language processing, and it is continuously being applied to new domains and problem types. As both large and small companies discover the power of deep learning, Nervana is well positioned to leverage this interest and enable broader market adoption, powering new solutions across various industries via the Nervana Platform.

We currently have two product offerings: Nervana Cloud and the neon deep learning framework. neon is Nervana’s open-source deep learning framework. It makes defining and using neural networks straightforward by providing high-level abstractions in standard Python, and it integrates easily with existing data pipelines. Unlike other open-source frameworks, neon is fully supported and maintained by Nervana, and we apply a rigorous testing methodology consistent with being an enterprise platform. neon is the fastest deep learning framework available today, enabling data scientists to train a model in hours, not days. Nervana Cloud removes the complexities associated with developing deep learning solutions, allowing data scientists to quickly build, train and deploy deep learning-based AI solutions using internal data.

What are the key factors that have enabled recent advancements in deep learning?
Put simply, it is the confluence of computing power, large datasets, and market demand that is driving deep learning advancements. With smaller datasets, simple regression techniques tend to work well enough and are easy to implement. However, when more data is thrown at these models, the model performance quickly hits a ceiling. Deep neural networks have much more representational capacity, enabling them to continue to learn on very large datasets. This process is very computationally intensive, necessitating specialized computing infrastructure.
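The capacity ceiling described above can be seen even on a toy problem. The sketch below (a NumPy illustration, not Nervana code) fits both a linear least-squares model and a one-hidden-layer network to XOR: the linear model is stuck predicting 0.5 for every input, while the small network, with its extra representational capacity, fits the data exactly.

```python
import numpy as np

# XOR: the classic task a linear model cannot represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Linear least-squares fit (with a bias column): the best it can do
# is predict 0.5 for every input -- a hard performance ceiling.
Xb = np.hstack([X, np.ones((4, 1))])
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
linear_pred = Xb @ w

# A tiny one-hidden-layer network trained with full-batch gradient descent.
rng = np.random.default_rng(0)
W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(10000):
    h = np.tanh(X @ W1 + b1)             # hidden layer
    p = sigmoid(h @ W2 + b2)             # output probability
    g = (p - y) / len(X)                 # grad of mean cross-entropy w.r.t. logits
    gh = (g @ W2.T) * (1 - h ** 2)       # backprop through tanh
    W2 -= lr * h.T @ g; b2 -= lr * g.sum(0)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(0)

mlp_pred = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)
```

Throwing more data at the linear model cannot move it off 0.5; the hidden layer is what buys the extra capacity, and at scale that capacity is what makes the training so computationally intensive.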

What are the main types of problems now being addressed in the deep learning space?
The major classes of data problems that are easily addressed by deep learning involve images, video, speech, and text. We find that these data modalities underlie many different business problems, which makes Nervana’s deep learning platform applicable across a broad range of industries. The industries we are targeting first are financial services, medical/healthcare, agriculture, automotive, online services and energy.

Healthcare: At its core, medical diagnostics is a big data problem. There is an abundance of data consisting of medical images, lab tests, clinical trials and physician notes. Nervana unlocks insight in this data to make healthcare more efficient and accessible. Specific healthcare use cases include treatment recommendation, medical image analysis, and patient identification and cohort discovery for medical research and trials.

Agriculture: According to the United Nations’ Food and Agriculture Organization, food production must increase by 60% to feed a growing population expected to hit 9 billion by 2050. The explosion of data and the use of deep learning will help farmers increase productivity and efficiency to achieve this ambitious target. Specific use cases include real-time plant phenotyping: Nervana is propelling advances in precision farming, where deep learning techniques for image analysis and object classification let farmers accurately measure and characterize crops. This enables applications like plant thinning, where robots deliver varying doses of fertilizer to each plant based on real-time plant phenotyping. Nervana can also be used for advanced plant breeding and predictive weather analytics.

Finance: Nervana provides financial institutions a complete solution for deploying deep learning as a core technology. Deep learning is broadly applicable to many financial industry data problems, and Nervana’s platform acts as a central hub where state-of-the-art algorithms can be applied across business areas. Specific use cases include detecting anomalies in a wide range of settings, such as flagging fraudulent credit card transactions, identifying unusual activity in an exchange limit-order book, and predicting sudden regime changes in the securities markets. Nervana can also be used to integrate data from disparate sources, such as asset price time series, Twitter volume and sentiment, SEC filing documents, analyst reports, satellite imagery, and text, audio, and video news feeds, to drive better business decisions.

Online Services: Online and mobile services generate a tremendous amount of customer data. Deep learning is driving innovation in areas like retail analytics, content discovery, and user experience.

Automotive: Deep learning is becoming the fundamental technology used to process the vast amounts of data coming from automobiles, and Nervana’s solutions enable the development of the latest driver aids and autonomous vehicles. Specific use cases include reliable speech recognition and predictive maintenance: collecting and analyzing sensor data from automobiles helps car manufacturers better anticipate maintenance and servicing needs, with deep learning surfacing actionable insights to predict when components may fail.

Energy: To meet the world’s energy demands, it is imperative to be smarter about how resources are spent. Data-driven decisions on energy exploration and operations will make better use of the earth’s fossil fuels. Oil and gas prices can be very volatile, but deep learning can help predict macro-economic trends to guide investment in exploration and production.

What developments can we expect to see in deep learning in the next 5 years?
“Recent advances in deep learning technology will help us solve many of the world’s most complex challenges,” said Steve Jurvetson, Partner at DFJ. “By developing deep learning solutions that are faster, easier and less expensive to use, Nervana is democratizing deep learning and fueling advances in medical diagnostics, image and speech recognition, genomics, agriculture, finance, and eventually across all industries.”

Broad application of deep learning into all aspects of our lives will happen in the next 5 years. The way we access healthcare, shop, farm, or interact with other people will all be shaped by learning machines. Deep learning will allow us to better and more efficiently use resources and drive down the cost of services. In addition, our experiences with machines will be personalized as websites and devices adapt to our individual preferences.

On the research side, unsupervised learning will advance in the next 5 years. About 90% of all data in the world is unlabelled, meaning there is no description of the meaning of the data or of what inferences can be drawn from it (images, sounds, GPS tracks, exercise data, etc.). Unsupervised learning is the next big frontier for finding useful inferences in data.
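A classic unsupervised technique, k-means clustering, illustrates the idea of recovering structure from raw, unlabelled points. The NumPy sketch below (an illustration, not Nervana code) clusters two synthetic blobs without ever seeing a label.

```python
import numpy as np

# Unlabelled data: two well-separated Gaussian blobs, no labels attached.
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(loc=-5.0, scale=1.0, size=(100, 2)),
                  rng.normal(loc=5.0, scale=1.0, size=(100, 2))])

# Plain k-means (k=2): alternate between assigning each point to its
# nearest centre and re-estimating each centre as the mean of its points.
centres = data[rng.choice(len(data), size=2, replace=False)]
for _ in range(20):
    dists = np.linalg.norm(data[:, None, :] - centres[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)   # structure inferred with no labels
    centres = np.array([data[assignments == k].mean(axis=0) for k in range(2)])

# The two centres end up near the true blob means, recovered purely from the data.
```

Deep unsupervised methods aim at the same goal on far richer data: discovering the latent structure of images, sounds, or sensor streams without anyone labelling them first.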

What advancements excite you most in the field?
Personally, I think healthcare is an important area of advancement. The cost of care is skyrocketing, and machine learning can simultaneously bring down the price of care as well as improve the quality. Using machines to aid with diagnostics means that the same high level of competency can be easily applied to more people.

Naveen Rao will be speaking at the RE•WORK Deep Learning Summit in Boston, on 12-13 May 2016. Other speakers include Yoshua Bengio, Full Professor at Université de Montréal; Joseph Durham, Manager of Research & Advanced Development at Amazon Robotics; Vivienne Sze, Professor at MIT; Tony Jebara, Director of Machine Learning Research at Netflix. 

The Deep Learning Summit is taking place alongside the Connected Home Summit. For more information and to register, please visit the event website.
