Global experts from industry and academia will once again come together at the Deep Learning Summit & Deep Learning in Healthcare Summit in Boston on 23 & 24 May to explore the latest advancements in deep learning and discuss how to leverage new AI methods. Attendees will have access to both summits as well as to the Deep Dive track, which will provide technical insight into the key topics explored. With access to three tracks, attendees will be able to delve into their favourite deep learning tools and gain a better understanding of how best to leverage them.
A deep learning tool widely explored across all tracks at the summits is the neural network: a computing architecture loosely modelled on the biological neural networks that make up the human brain. Neural networks learn from data alone by mapping inputs to outputs. With this subfield of AI showing large potential, both academia and industry are researching neural networks and applying them in their work. If you're interested in neural networks, here are the presentations you should be attending:
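To make "mapping inputs to outputs" concrete, here is a minimal sketch of a neural network learning the XOR function from data alone; the two-layer architecture, the learning rate, and the choice of XOR as a toy task are illustrative assumptions, not anything from a summit talk:

```python
import numpy as np

# A tiny two-layer neural network that learns XOR purely from
# example (input, output) pairs -- the "mapping inputs to outputs" idea.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(10_000):
    # Forward pass: map inputs through the layers.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: cross-entropy gradient w.r.t. the output logit.
    grad_p = p - y
    grad_h = (grad_p @ W2.T) * (1 - h ** 2)

    # Gradient-descent updates.
    W2 -= lr * h.T @ grad_p
    b2 -= lr * grad_p.sum(0)
    W1 -= lr * X.T @ grad_h
    b1 -= lr * grad_h.sum(0)

print((p > 0.5).astype(int).ravel())
```

After training, thresholding the outputs at 0.5 recovers the XOR truth table, even though the network was never told the rule, only the examples.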
Brandon will explore how to interpret neural networks and give insights into how to use deep learning to solve an image processing problem.
"Deep neural networks are famously difficult to interpret. We'll take a tour of their inner workings to build an intuition of what's inside the black box and how all those cogs fit together. Then we'll use those insights as we step through an image processing problem with deep learning, showing at every step what the neural network is "thinking"."
Jonathan will look at using neural network-based force fields to bypass expensive quantum mechanics calculations in molecular dynamics simulations, enabling the study of material properties and physical mechanisms at the atomic level.
"Neural network-based force fields have recently emerged as a way to bypass expensive quantum mechanics calculations in molecular dynamics simulation, which enables us to study material properties and physical mechanisms at the atomistic level. Despite fundamental advances in rotation-invariant symmetry function “fingerprint” data representation, the derivative fingerprints required for the atomic force calculation significantly increase the training and execution runtime required in this approach. In this talk, we present an algorithm to bypass the need for fingerprint derivatives and perform direct atomic force prediction, which significantly reduces the computation effort required for training and executing the neural network force field for molecular dynamics simulations."
Yi will discuss a method to detect benign epilepsy with centrotemporal spikes, the most common form of epilepsy in children, which takes advantage of three different data sources to achieve the best prediction results.
"Deep learning has been successfully used in many applications such as computer vision, automatic speech recognition, natural language processing, audio recognition, and medical image processing and disease diagnosis. Recently, our group has designed a method to detect a type of epilepsy - benign epilepsy with centrotemporal spikes, which is the most common epilepsy in children. In our method, we use three sources of data: hand-crafted features from MRI images based on doctors’ knowledge, 3D MRI images and 4D functional MRI images. The final prediction decision is obtained by fusing the three prediction results through another neural network. Our idea is to take advantage of all three data sources, which have different strengths and important features, to achieve the best prediction results. We have done many experiments which show that the proposed method outperforms existing prediction methods. Future improvements, including how to use more data sources, will also be outlined in this talk."
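The late-fusion idea in the abstract above, where predictions from separate models are combined by another learned model, can be sketched as follows. The three sources here are simulated stand-ins (not real MRI/fMRI features), and the logistic-regression fuser is a simplified assumption standing in for the fusion neural network described in the talk:

```python
import numpy as np

# Sketch of late fusion: three models each emit a probability for the
# same case; a small learned "fuser" weights them into a final decision.
rng = np.random.default_rng(1)
n = 400
labels = rng.integers(0, 2, size=n).astype(float)

# Simulated per-source predictions: each source is noisily correlated
# with the true label, with a different noise level (different strengths).
noise_levels = [0.3, 0.2, 0.4]
preds = np.stack(
    [np.clip(labels + rng.normal(0, s, n), 0, 1) for s in noise_levels],
    axis=1,
)  # shape (n, 3): one column per source

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fusion model: logistic regression over the three source predictions,
# trained with gradient descent on the cross-entropy loss.
w = np.zeros(3)
b = 0.0
for _ in range(2000):
    p = sigmoid(preds @ w + b)
    g = p - labels                 # cross-entropy gradient
    w -= 0.1 * preds.T @ g / n
    b -= 0.1 * g.mean()

fused_acc = np.mean((sigmoid(preds @ w + b) > 0.5) == labels)
print(f"fused accuracy: {fused_acc:.2f}")
```

The fuser learns how much to trust each source, which is the point of combining data sources with different strengths rather than relying on any single one.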
Matthew will review the key properties of deep neural networks and how these properties are leveraged to deliver efficient inference on energy-, compute-, and space-constrained platforms.
"Deep neural networks are a key technology at the core of advanced audio and video applications. As these applications begin to migrate from large servers executing in the cloud to mobile and embedded platforms, they place significant demands on the underlying hardware platform. This talk will review the key properties of these models and how these properties are leveraged to deliver efficient inference on energy, compute, and space constrained platforms."
Jay will discuss deep learning approaches that he is currently working on to improve Twitter’s recommendations and conversational health, including using neural networks to train co-embeddings of new users and items.
"The cold start problem for new users is a classic challenge for recommender systems. In this talk, I will discuss some deep learning approaches that can be used to address this problem, including using neural networks to train co-embeddings of new users and items, and serving them in an efficient way at runtime via approximate nearest neighbor algorithms like LSH or HNSW. I will also touch on some of the difficulties of evaluating such models both offline and online in the context of A/B tests."
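The serving step mentioned in the abstract above can be sketched with random-hyperplane LSH: item embeddings are hashed into buckets, and a new user's co-embedding is hashed into the same space to retrieve candidates without scanning every item. The embeddings here are random placeholders (in practice they would come from the trained co-embedding model), and the 8-bit hash size is an illustrative assumption:

```python
import numpy as np

# Random-hyperplane LSH for approximate nearest-neighbor retrieval.
rng = np.random.default_rng(42)
dim, n_items, n_bits = 32, 10_000, 8

# Placeholder item embeddings, unit-normalised for cosine similarity.
items = rng.normal(size=(n_items, dim))
items /= np.linalg.norm(items, axis=1, keepdims=True)

# Random hyperplanes define the hash: each bit is the sign of a projection.
planes = rng.normal(size=(n_bits, dim))

def lsh_key(v):
    return ((planes @ v) > 0).tobytes()

# Index build: bucket item ids by hash key.
buckets = {}
for i, v in enumerate(items):
    buckets.setdefault(lsh_key(v), []).append(i)

# Query: hash the new user's co-embedding, then rank only the
# candidates that landed in the same bucket.
user = rng.normal(size=dim)
user /= np.linalg.norm(user)
candidates = buckets.get(lsh_key(user), [])
ranked = sorted(candidates, key=lambda i: -(items[i] @ user))
print(f"{len(candidates)} candidates out of {n_items} items")
```

Because similar vectors tend to fall on the same side of random hyperplanes, each query inspects only a small bucket instead of all items, which is what makes this style of serving efficient at runtime; HNSW achieves the same goal with a graph-based index instead of hashing.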