Schedule

08:30

WELCOME

08:50

Alex Graves

Alex Graves, Google DeepMind

Neural Turing Machines

Neural Turing Machines extend the capabilities of neural networks by coupling them to an external memory matrix, with which they can selectively interact. The combined system embodies a kind of 'differentiable computer' that can be trained with gradient descent. This talk describes how Neural Turing Machines can learn basic computational algorithms, such as associative recall, from input and output examples alone.
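
As a rough illustration of the mechanism, the sketch below implements content-based memory addressing in NumPy: a softmax over cosine similarities between a key and each memory row yields differentiable read weights. The shapes and the sharpness parameter `beta` are illustrative assumptions, not details from the talk.

```python
import numpy as np

def content_addressing(memory, key, beta):
    """Differentiable content-based read: softmax over the cosine
    similarity between a key vector and every row of memory."""
    sim = memory @ key / (
        np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8)
    w = np.exp(beta * sim)           # beta sharpens the focus
    w /= w.sum()                     # attention weights over slots
    return w, w @ memory             # weights and the read vector

memory = np.random.randn(128, 20)    # hypothetical 128-slot memory
w, read = content_addressing(memory, np.random.randn(20), beta=5.0)
```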

Alex Graves, PhD, is a world-renowned expert in Recurrent Neural Networks and Generative Models. Alex did a BSc in Theoretical Physics at Edinburgh, Part III Maths at Cambridge, and a PhD in AI at IDSIA, followed by postdocs at TU-Munich and with Prof. Geoff Hinton at the University of Toronto. Most recently, Alex has been spearheading DeepMind's work on Neural Turing Machines.

09:10

Koray Kavukcuoglu

Koray Kavukcuoglu, Google DeepMind

End-to-End Learning of Agents

Reinforcement learning agents have achieved some successes in a variety of domains; however, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. In this talk I will explain a novel algorithm (Deep Q-Network) that combines deep learning and reinforcement learning to enable agents to derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. The Deep Q-Network (DQN) algorithm achieves human-level performance on the ATARI 2600 domain, operating directly on raw images and game scores.
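
At its core, DQN regresses the online network's Q-values toward one-step bootstrap targets computed with a frozen target network. A minimal NumPy sketch of that target computation follows; the batch size, rewards, and action count are made up for illustration.

```python
import numpy as np

def dqn_targets(q_next, rewards, dones, gamma=0.99):
    """One-step Q-learning targets, r + gamma * max_a' Q_target(s', a'),
    with the bootstrap term zeroed out on terminal transitions."""
    return rewards + gamma * (1.0 - dones) * q_next.max(axis=1)

# Hypothetical minibatch of 4 transitions over 3 actions.
q_next = np.random.rand(4, 3)            # target-network outputs for s'
rewards = np.array([1.0, 0.0, 0.0, -1.0])
dones = np.array([0.0, 0.0, 1.0, 0.0])   # 1.0 marks episode end
targets = dqn_targets(q_next, rewards, dones)
# The online network is then regressed toward `targets` at the actions
# actually taken, e.g. with a squared-error loss.
```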

Koray Kavukcuoglu, PhD, Principal Researcher. Koray trained and worked as an aerospace engineer before doing a machine learning PhD at NYU with Yann LeCun. Whilst there, he co-wrote the Torch platform, one of the most heavily used machine learning libraries in the world. Following his PhD, Koray was a Senior Researcher at Princeton/NEC Labs, where he worked on applying cutting-edge ML techniques.

09:30

Ben Medlock

Ben Medlock, SwiftKey

Fireside Chat with SwiftKey

As co-founder and CTO of SwiftKey, Ben Medlock invented the intelligent keyboard for smartphones and tablets that has transformed typing on touchscreens. The company’s mission is to make it easy for everyone to create and communicate on mobile.

SwiftKey is best known for its smart typing technology which learns from each user to accurately autocorrect and predict their most-likely next word, and features on more than 250 million devices to date. SwiftKey Keyboard for Android is used by millions around the world and recently went free on Google Play after two years as the global best-selling paid app. SwiftKey Keyboard for iPhone and iPad launched in September 2014, following the success of iOS note-taking app SwiftKey Note. SwiftKey has been named the No 1 hottest startup in London by Wired magazine, ranked top 5 in Fast Company’s list of the most innovative productivity companies in the world and has won a clutch of awards for its innovative products and workplace. Ben has a First Class degree in Computer Science from Durham University and a PhD in Natural Language and Information Processing from the University of Cambridge.

09:50

Alison B Lowndes

Alison B Lowndes, NVIDIA

Deep Learning's Impact on Modern Life

This 60-year-old research field within Artificial Intelligence has recently exploded across both media and academia. Research breakthroughs are now filtering into almost every facet of human life, commercial and personal. What was apparently sci-fi – machines that can see, hear and understand the world around them – is fast becoming the norm, on a grand scale. We take a closer look at the reality of the perfect storm created by society's big data and NVIDIA's GPU computational power.

Deep Learning Solutions Architect and Community Manager EMEA. Alison is a recent mature graduate in Artificial Intelligence (University of Leeds), combining technical and theoretical computer science with a physics background. She completed a thorough empirical study of deep learning, specifically with GPU technology, covering the entire history and technical aspects of GPGPU and the underlying mathematics. 25+ years in international project management and entrepreneurship, a role as Founder Trustee of a global volunteering network (in her spare time), and two decades spent in the internet arena give her a universal view of any problem.

10:10

COFFEE

10:30

Matthew Zeiler

Matthew Zeiler, Clarifai Inc

Leveraging Multiple Dimensions

Forevery: Deep Learning for Everyone!

Forevery is a free photo discovery app that takes you on a personalized journey through every memory saved on your camera roll. Using deep learning, the app automatically applies relevant tags for 11,000+ objects, ideas, themes, locations, and feelings to each picture, so searching for every photo and rediscovering every memory is a snap! In addition to tagging, we build in the ability to teach the app custom concepts, like your friends and family or your favorite sports team. Forevery learns what you care about most, auto-generating photo stories that are easy to share with the people who matter most.

Clarifai was founded by Matt Zeiler, a U of Toronto and NYU alumnus who worked with several pioneers in neural networks, and Adam Berenzweig, who left Google after 10+ years, where he worked on Goggles and visual search.

Matthew Zeiler, PhD, Founder and CEO of Clarifai Inc. studied machine learning and image recognition with several pioneers in the field of deep learning at University of Toronto and New York University. His insights into neural networks produced the top 5 results in the 2013 ImageNet classification competition. He founded Clarifai to push the limits of practical machine learning, which will power the next generation of intelligent applications and devices.

10:50

Max Welling

Max Welling, QUVA

Why Unsupervised (Deep) Learning is Important

Deep learning is mostly associated with supervised learning, that is, learning a mapping from features to labels. Acquiring labels is, however, often expensive, and it ignores the large amounts of unlabeled data that are cheaply available. In this talk I will show how to train (deep) fully probabilistic "auto-encoder" models that use both labeled and unlabeled data, and discuss how these can be extended to incorporate certain invariances. Finally, I will discuss why these models are important for applications of deep learning in domains such as healthcare, where the number of data cases is often much smaller than the number of measured features.
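
For concreteness, the objective such probabilistic auto-encoders maximise on unlabeled data is the variational lower bound (ELBO). Below is a minimal NumPy sketch, assuming a Gaussian encoder and a Bernoulli decoder; the semi-supervised extension discussed in the talk would add a classifier term on labeled examples.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """KL( N(mu, sigma^2) || N(0, I) ) in closed form, per example."""
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar, axis=1)

def elbo(x, x_recon, mu, logvar):
    """Per-example variational lower bound: Bernoulli reconstruction
    log-likelihood minus the KL term. Unlabeled data contributes this
    bound alone; labeled data would add a classification loss."""
    rec = np.sum(x * np.log(x_recon + 1e-8)
                 + (1 - x) * np.log(1 - x_recon + 1e-8), axis=1)
    return rec - gaussian_kl(mu, logvar)
```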

Max Welling is a Professor of Computer Science at the University of Amsterdam and the University of California Irvine. In the past he held postdoctoral positions at Caltech ('98-'00), UCL ('00-'01) and the U. Toronto ('01-'03). He received his PhD in '98 under the supervision of Nobel laureate Prof. G. 't Hooft. Max Welling serves as associate editor in chief of IEEE TPAMI, one of the highest impact journals in AI (impact factor 4.8). He serves on the editorial boards of JMLR and JML and was an associate editor for Neurocomputing, JCGS and TPAMI. In 2009 he was conference chair for AISTATS, in 2013 he was program chair for NIPS (the largest and most prestigious conference in machine learning), in 2014 he was general chair for NIPS, and in 2016 he will be a program chair at ECCV. He has received multiple grants from NSF, NIH, ONR, and NWO, among which an NSF CAREER grant in 2005. He is the recipient of the ECCV Koenderink Prize in 2010 and the best paper award at ICML 2012. Welling is currently the director of the master's program in artificial intelligence at the UvA and a member of the advisory board of the newly opened Amsterdam Data Science Center in Amsterdam. He is also a member of the Neural Computation and Adaptive Perception program at the Canadian Institute for Advanced Research. Welling's research focuses on large-scale statistical learning. He has made contributions in Bayesian learning, approximate inference in graphical models, deep learning and visual object recognition. He has over 150 academic publications.

11:10

Lior Wolf

Lior Wolf, Tel-Aviv University

Image Annotation using Deep Learning and Fisher Vectors

We present a system for addressing one of the holy grails of computer vision: matching images and text, and describing an image with automatically generated text. Our system combines deep learning tools for images and text, namely Convolutional Neural Networks, word2vec, and Recurrent Neural Networks, with a classical computer vision tool, the Fisher Vector. The Fisher Vector is modified to support hybrid distributions that are a better fit for natural language processing. Our method proves extremely potent, and we outperform all concurrent methods by a significant margin.
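
A toy sketch of the matching step: once images and sentences are embedded into a shared space (CNN features on one side; pooled word2vec / Fisher Vector representations on the other), retrieval reduces to cosine similarity. The dimensions below are illustrative assumptions.

```python
import numpy as np

def match_scores(image_emb, sentence_embs):
    """Cosine similarity between one image embedding and candidate
    sentence embeddings, assumed to live in a shared space."""
    img = image_emb / (np.linalg.norm(image_emb) + 1e-8)
    sents = sentence_embs / (
        np.linalg.norm(sentence_embs, axis=1, keepdims=True) + 1e-8)
    return sents @ img               # higher score = better match

scores = match_scores(np.random.randn(300), np.random.randn(5, 300))
best_caption = scores.argmax()       # index of the best-matching sentence
```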

Prof. Lior Wolf is a faculty member at the School of Computer Science at Tel-Aviv University. Previously, he was a post-doctoral associate in Prof. Poggio's lab at MIT. He graduated from the Hebrew University, Jerusalem, where he worked under the supervision of Prof. Shashua. Lior Wolf was awarded the 2008 Sackler Career Development Chair, the Colton Excellence Fellowship for new faculty (2006-2008), the Max Shlumiuk Award for 2004, and the Rothschild Fellowship for 2004. His joint work with Prof. Shashua in ECCV 2000 received the best paper award, and their work in ICCV 2001 received the Marr Prize honorable mention. He was also awarded the best paper award at the post-ICCV 2009 workshop on eHeritage and the pre-CVPR 2013 workshop on action recognition. Prof. Wolf's research focuses on computer vision and applications of machine learning, and includes topics such as face identification, document analysis, digital paleography, and video action recognition.

11:30

Sven Behnke

Sven Behnke, University of Bonn

From the Neural Abstraction Pyramid to Semantic RGB-D Perception

Learning Semantic Environment Perception for Cognitive Robots

Robots need to perceive their environment to act in a goal-directed way. While mapping the environment geometry is a necessary prerequisite for many mobile robot applications, understanding the semantics of the environment will enable novel applications, which require more advanced cognitive abilities. In the talk, I will report on methods that we developed for learning tasks like the categorization of surfaces, the detection, recognition, and pose estimation of objects, and the transfer of manipulation skills to novel objects. By combining dense geometric modelling – which is based on registration of measurements and graph optimization – and semantic categorization – which is based on random forests, deep learning, and transfer learning – 3D semantic maps of the environment are built. Our team demonstrated the utility of semantic environment perception with cognitive robots in multiple challenging application domains, including domestic service, space exploration, search and rescue, and bin picking.

Prof. Dr. Sven Behnke is a full professor for Computer Science at University of Bonn, Germany, where he heads the Autonomous Intelligent Systems group. He has been investigating deep learning since 1997. In 1998, he proposed the Neural Abstraction Pyramid, hierarchical recurrent convolutional neural networks for image interpretation. He developed unsupervised methods for layer-by-layer learning of increasingly abstract image representations. The architecture was also trained in a supervised way to iteratively solve computer vision tasks, such as superresolution, image denoising, and face localization. In recent years, his deep learning research focused on learning object-class segmentation of images and semantic RGB-D perception.

11:50

LUNCH

12:10

Bernardino Romera Paredes

Bernardino Romera Paredes, University of Oxford

Deep Holistic Image Understanding

Image understanding involves not only object recognition, but also object delineation. This shape recovery task is challenging for two reasons: first, the necessity of learning a good representation of the visual inputs; second, the need to account for contextual information across the image, such as edges and appearance consistency. Deep convolutional neural networks are successful at the former, but have limited capacity to delineate visual objects. I will present a framework that extends the capabilities of deep learning techniques to tackle this scenario, obtaining cutting-edge results in semantic segmentation (i.e. detecting and delineating objects) and depth estimation.
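
As a minimal sketch of the delineation step, a fully convolutional network emits a map of per-class scores and every pixel is labeled by a per-pixel softmax. This is a generic illustration, not the specific architecture presented in the talk.

```python
import numpy as np

def per_pixel_softmax(logits):
    """Turn a (classes, H, W) map of scores from a fully convolutional
    network into per-pixel class probabilities."""
    e = np.exp(logits - logits.max(axis=0, keepdims=True))  # stable
    return e / e.sum(axis=0, keepdims=True)

logits = np.random.randn(21, 64, 64)   # e.g. 21 PASCAL VOC classes
labels = per_pixel_softmax(logits).argmax(axis=0)  # (H, W) mask
```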

Bernardino is a post-doc in Torr Vision Group at University of Oxford. He received his PhD degree from University College London in 2014, supervised by Prof. Massimiliano Pontil and Dr. Nadia Berthouze. He has published in top-tier machine learning conferences such as NIPS, ICML and AISTATS, receiving several awards such as the Best Paper Runner-up Prize at ICML 2013, and the Best Paper Award at ACII 2013. During his PhD he interned at Microsoft Research, Redmond. His research focuses on multitask and transfer learning methods applied to computer vision tasks such as object recognition and segmentation, and emotion recognition.

12:30

Miriam Redi

Miriam Redi, Bell Labs Cambridge

The Subjective Eye of Machine Vision

Can Machines See The Invisible?

In this talk we will explore the invisible side of visual data, investigating how machine learning can detect subjective properties of images and videos, such as beauty, creativity, sentiment, style, and more curious characteristics. We will see the impact of such detectors in the context of web and social media. And we will analyse the precious contribution of computer vision in understanding how people and cultures perceive visual properties, underlining the importance of feature interpretability for this task.

Miriam Redi is a Research Scientist in the Social Dynamics team at Bell Labs Cambridge. Her research focuses on content-based social multimedia understanding and culture analytics. In particular, she explores ways to automatically assess visual aesthetics, sentiment and creativity, and exploit the power of computer vision in the context of web, social media, and online communities. Miriam got her Ph.D. at the Multimedia group in EURECOM, Sophia Antipolis. After obtaining her PhD, she was a Postdoc in the Social Media group at Yahoo Labs Barcelona and a Research Scientist at Yahoo London.

12:50

Cees Snoek

Cees Snoek, QUVA

Video Understanding: What to Expect Today and Tomorrow?

In this talk I will give an overview of recent advances in video understanding. For humans, understanding and interpreting the video signal that enters the brain is an amazingly complex task. Approximately half the brain is engaged in assigning a meaning to the incoming imagery, starting with the categorization of all visual concepts in the scene, like an airplane or a cat face. Thanks to yearly concept detection competitions, vast amounts of training data, and several artificial intelligence breakthroughs, categorization of video at the concept level has now matured from an academic challenge to a commercial enterprise. As a natural response, the academic community shifts the attention to more precise video understanding in the forms of localized actions, like phoning and sumo wrestling, as well as translating videos into single sentence summaries such as ‘a person changing a vehicle tire’ and ‘a man working on a metal crafts project’. We present recent results in these exciting new directions and showcase real-world retrieval with the state-of-the-art MediaMill video search engine, even for recognition scenarios where training examples are absent.

Cees Snoek is a director of QUVA, the joint research lab of the University of Amsterdam and Qualcomm on deep learning and computer vision. He is also a principal engineer at Qualcomm and an associate professor at the University of Amsterdam. He was previously visiting scientist at Carnegie Mellon University, Fulbright scholar at UC Berkeley and head of R&D at Euvision Technologies (acquired by Qualcomm). His research interests focus on video and image recognition. Dr. Snoek is recipient of several career awards, including the Netherlands Prize for ICT Research. Cees is general chair of ACM Multimedia 2016 in Amsterdam.

13:10

COFFEE

13:30

Andrew Simpson

Andrew Simpson, University of Surrey

Perpetual Learning Machines: Deep Neural Networks with Brain-like On-The-Fly Learning & Forgetting

Deep neural networks are often compared to the brain. However, in reality, deep neural networks cannot do what the brain does – learn on the fly – they must be trained before being used and forever after they are frozen in a single state of knowledge, never to improve, never again to learn. In this talk I introduce the Perpetual Learning Machine – a new type of deep neural network that learns on the fly, remembers and forgets like a brain.

Andrew Simpson holds a PhD in Human Auditory Perception and is a former games industry software engineer. He is a Research Fellow in the Centre for Vision, Speech and Signal Processing at the University of Surrey and also holds the position of Honorary Research Associate at the Ear Institute, University College London. His main interests are artificial neural networks and signal processing for speech and music. Dr Simpson has published 10 papers on Deep Learning since January this year.

13:50

Sébastien Bratières

Sébastien Bratières, University of Cambridge

Deep Learning for Speech Recognition

Speech technology is moving ever faster from research conferences to the consumer market, a trend that deep learning accelerated in 2010-2013. This talk will survey advances in speech technology, mainly, but not only, due to deep neural net models. We'll go through the architectures in use today (DNN acoustic models, but also CNNs and, more recently, long short-term memory networks). I'll draw the connection to business issues such as the need for privacy-preserving (e.g. embedded) technology, and opportunities for small teams who don't command huge computing clusters and masses of data. Finally, I'll give an outlook on future directions: end-to-end speech recognition and the integration of spoken language understanding.

Sébastien Bratières has spent 15 years in the speech and language industry in different European ventures, starting from the EU branch of Tellme Networks (now Microsoft) to startups in speech recognition and virtual conversational agents. Today, Sébastien is engaged in a PhD in statistical machine learning with Zoubin Ghahramani at the University of Cambridge, UK, and consults for dawin gmbh, a German SME producing custom speech solutions for industry use.

Sébastien graduated with master’s degrees from Ecole Centrale Paris, France, in engineering, and from the University of Cambridge in speech and language processing.

14:10

Panel Session: What Does the Future Hold for Deep Learning?

14:30

REGISTRATION & LIGHT BREAKFAST

IMAGE RECOGNITION

VIDEO ANALYSIS

15:30

CONVERSATION & DRINKS

Jeffrey de Fauw

Jeffrey de Fauw, DeepMind

Detecting Diabetic Retinopathy With Deep Learning

Diabetic retinopathy is retinal damage caused by diabetes, potentially leading to loss of vision and even blindness. In his talk, Jeffrey will reflect on his experience of building a model, using convolutional neural networks, to grade the severity of diabetic retinopathy in high-resolution fundus images (images of the back of the eye). He did this work in the context of the Kaggle Diabetic Retinopathy Detection competition, where he finished fifth.
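
The competition scored submissions with the quadratic weighted kappa, which penalises predictions by their squared distance from the true severity grade (0 to 4). A small NumPy implementation of that metric, for reference:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_grades=5):
    """Agreement between predicted and true severity grades (0-4),
    penalising errors by their squared distance."""
    O = np.zeros((n_grades, n_grades))          # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    W = np.array([[(i - j) ** 2 for j in range(n_grades)]
                  for i in range(n_grades)]) / (n_grades - 1) ** 2
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()  # expected by chance
    return 1.0 - (W * O).sum() / (W * E).sum()

print(quadratic_weighted_kappa([0, 4, 2, 1], [0, 3, 2, 1]))
```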

Jeffrey De Fauw studied pure mathematics at Ghent University before becoming more interested in machine learning problems through Kaggle competitions. Soon after he was introduced to (convolutional) neural networks and has since spent most of his time working with them. Besides always looking for challenging problems to work on, he has also become very interested in trying to find more algebraic structure in methods of representation learning.

Sander Dieleman

Sander Dieleman, Google DeepMind

Sander Dieleman is a research scientist at Google DeepMind and a PhD student in the Reservoir Lab at Ghent University in Belgium. The main focus of his PhD research is applying deep learning and feature learning techniques to music information retrieval (MIR) problems, such as audio-based music classification, automatic tagging and music recommendation.

Martin Bryant

Martin Bryant, The Next Web

Moderator

Martin Bryant is Editor-in-Chief at The Next Web, a leading online publication covering Internet technology, business and culture, where he oversees the site's editorial output. Martin has a particular interest in European startups and the evolution of media. He spends much time travelling the continent to meet the people behind the technologies that will shape the future. He is also co-founder of TechHub Manchester, part of TechHub's international network of physical and virtual spaces for technology startups.

John Henderson

John Henderson, White Star Capital

Moderator

John is an investor in early-stage technology companies at White Star Capital, where he has a particular focus on machine intelligence. His interest in the space stems from his previous role heading up business development and operations at Summly, an NLP company which was acquired by Yahoo in May 2013. John has also held roles at Facebook, The Boston Consulting Group and Linklaters, as well as co-founding Bush Campus in 2005.

Tony Robinson

Tony Robinson, Speechmatics

Panelist

Dr Tony Robinson obtained his PhD from Cambridge University Engineering Department in 1989. For the next decade he led the connectionist speech recognition research group in the university. He started his first company in 1995 and has founded or been involved with a large number of start-ups in the last two decades - including SpinVox, Softsound and Autonomy - mostly in the area of speech recognition and machine learning. He is pleased that the techniques he pioneered in the 1990s are now in vogue. His passion is the application of machine learning algorithms to tasks that traditionally had been considered impossible for computers to solve.

Daniel Hulme

Daniel Hulme, Satalia

Panelist

Daniel is the CEO of Satalia (NPComplete Ltd), a UCL spin-out that provides AI-inspired solutions to industry's hardest problems. He's the co-founder of the Advanced Skills Initiative, which transitions scientists into industry as data scientists.

Daniel has a Master's and a Doctorate in AI from UCL, and is Director of UCL's Business Analytics MSc, applying AI to solve business and social problems. He lectures in Computer Science and Management Science at UCL and PCL.

Daniel holds Advisory and Executive positions in many companies, holds an international Kauffman Global Entrepreneur Scholarship, and actively promotes entrepreneurship and technology innovation across the globe.

Buttontwitter Buttonlinkedin

Alison B Lowndes

Alison B Lowndes, NVIDIA

Compère

SPEECH RECOGNITION & DEEP LEARNING ADVANCEMENTS

08:30

Wally Trenholm

Wally Trenholm, Sightline Innovation

The Commercialisation of Deep Learning

The world has not seen a more disruptive and powerful technology since the inception of the internet itself. Deep Learning is going to transform every single industry that it touches. In manufacturing industries, the application of deep learning is as powerful as the introduction of robotics, with the ability to automate higher-level human decision making. Tasks such as quality inspection, process monitoring and production analysis rely heavily on humans but continue to be plagued with problems.

Similarly, in medical diagnostics, the infrastructure around early detection and lab testing is ripe for transformation. This presentation will discuss how Sightline is applying the power of deep learning directly to these industries and effecting change to solve real problems with its Deep Learning cloud engine, Sightline Cortex.

Wally is a technology visionary and serial entrepreneur who sold his previous company to Research In Motion. He has over 25 years of programming and 18 years of management experience. With Sightline Innovation Wally has connected complex science and business with the goal of creating a leading global technology company around Deep Learning.

As the Founder and CEO of Sightline Innovation, he has built a company focused on practical deep learning solutions for medical diagnostics and manufacturing. In a few short years, Sightline Innovation has already been successful at selling and deploying its deep learning products in commercial settings, and built a powerful technology force to support it.

08:50

Paul Murphy

Paul Murphy, Clarify

Deep Learning & Speech: Adaptation, the Next Frontier

The speech community is finally excited about deep learning, but we’re proceeding with caution. Adaptation is critical to understanding real-world speech data. We need to adapt to acoustics and language of course, but also to context. To date, DNNs have shown great promise, but their ability to adapt to the unexpected is still in question. This talk will look at where we are today, as well as the challenges still in front of us.

Paul Murphy is one of Clarify's founders and its CEO. Paul's career in the software operations industry has spanned twenty years and three continents. Ten years were dedicated to understanding and building large systems on Wall Street for clients like J.P. Morgan and Salomon Brothers. Paul's work in this area allowed him to explore a broad range of computing solutions, from mainframes to web services, and the gamut of space-time tradeoffs required by dissimilar front and back office systems. Thirteen years ago, Paul moved to London to work at Adeptra, a pioneer in the use of automated outbound calling for credit card fraud detection and prevention. As Adeptra's CTO, he developed all of the software which enabled Adeptra to place intelligent, interactive outbound calls on behalf of clients; these systems made extensive use of text-to-speech and voice recognition technology. Since then, Paul has dedicated his time to developing technologies that leverage emerging voice processing techniques.

09:10

Appu Shaji

Appu Shaji, EyeEm

Deep Learning for Real Photography

Recording The Visual Mind: Understanding Aesthetics with Deep Learning

With the rise of mobile cameras, the process of capturing good photos has been democratized, and this overload of content has created a challenge in search. One important aspect of photography is that every image communicates with a different audience in a different form. This talk will address how we use computer vision techniques at EyeEm to measure visual aesthetics in photography and, beyond that, personalize the image search experience to find the photos you personally find beautiful.

Appu is the Head of Research & Development at EyeEm. His first company, sight.io, was acquired by EyeEm in 2014. He previously held post-doctoral positions at EPFL, working alongside Prof. Sabine Süsstrunk and Prof. Pascal Fua. In 2009, Appu obtained his Ph.D. from IIT Bombay, where he was awarded the best thesis award from the Computer Science Department. He was also selected as one of the 20 most promising entrepreneurs in Switzerland in 2013. His research has appeared in top computer vision journals and conferences such as TPAMI, CVPR, and ACM Multimedia.

09:30

Nic Lane

Nic Lane, UCL

Squeezing Deep Learning onto Wearables & Phones

Deep Learning for Embedded Devices: The Next Step in Privacy-Preserving High-Precision Mobile Health and Wellbeing Tools

State-of-the-art models that, for example, recognize a face, track emotions, or monitor activity are increasingly based on deep learning principles. But bleeding-edge health tools, like smartphone apps and wearables, that require such user information must rely on less reliable learning methods to process data locally, because of the excessive device resources demanded by deep models. In this talk, I will describe our research, which drives towards a complete rethinking of how existing forms of deep learning execute at inference time on embedded health platforms. Not only does this radically lower energy, computation and memory requirements; it also significantly increases the utilization of commodity processors (e.g., GPUs, CPUs), and even emerging purpose-built hardware, when available.
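
One generic way to cut the memory and energy cost of on-device inference, in the spirit of the talk though not necessarily the speaker's own method, is low-precision weight quantization. A minimal NumPy sketch of symmetric 8-bit quantization:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric uniform 8-bit quantization of a weight tensor; the
    float tensor is approximately recovered as q * scale."""
    scale = np.abs(w).max() / 127.0 + 1e-12
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)          # 4x smaller than float32 weights
w_approx = q.astype(np.float32) * scale
```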

Nic Lane is a Principal Scientist at Bell Labs where he is a member of the Internet of Things research group. Before joining Bell Labs, he spent four years as a Lead Researcher at Microsoft Research based in Beijing. Nic received his Ph.D. from Dartmouth College (2011), his dissertation pioneered community-guided techniques for learning models of human behavior. These algorithms enable mobile sensing systems to better cope with diverse user populations and conditions routinely encountered in the real-world. More broadly, Nic's research interests revolve around the systems and modeling challenges that arise when computers collect and reason about people-centric sensor data. At heart, he is an experimentalist who likes to build prototype sensing systems based on well-founded computational models.

09:50

Juris Puce

Juris Puce, Kleintech

The Challenges of Human Labour Automatisation with Deep Learning in the Transport Industry

The key idea behind deep learning here is to automate human labour, reducing costs and time while increasing the precision with which a task can be done. In the transport industry, tasks like cargo number recognition and counting of objects were the first to be automated, and have now been improved to a very high precision. However, there are many other tasks in the industry that could be automated, but computers currently lack the precision to guarantee compliance with the industry's security standards. This presentation will discuss how we have overcome some of these challenges and give an insight into upcoming applications and their effects on the industry.

Juris Pūce is an adventurous entrepreneur, always looking for new challenges and businesses to build. He is interested in all things technologically innovative and somewhat unknown, hence most of his companies are IT related. With over 15 years of experience in technology-related business management, Juris Pūce currently divides his work between being a visionary for various start-ups and being the CTO of KleinTech, a company that specialises in complex machine vision and deep learning technology solutions for the transport and security industries.

10:10

Eiso Kant

Eiso Kant, source{d}

Using Neural Networks To Predict Developers' Chances to Get Hired

Source Code Abstracts Classification Using CNN

Convolutional neural networks (CNNs) are becoming the standard approach for many machine learning problems, usually involving image, audio or natural language data. At source{d} we are applying common and novel deep learning patterns to problems where software developers and projects are the input, which is something very different. We are at the beginning of a fascinating journey, but already have something to share. In this talk I will present the bits of our SourceNN deep neural network that enable classification of short source code fragments (50 lines) taken randomly from several projects. The input features are extracted by a syntax highlighter and look similar to the minimaps in source code editors.
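
A toy sketch of how highlighter-style features could turn a 50-line snippet into an image-like grid for a CNN. The token classes and heuristics below are invented stand-ins for a real syntax highlighter; SourceNN's actual features are not described in this abstract.

```python
import numpy as np

# Invented stand-ins for syntax-highlighter token classes.
LITERAL, IDENT, OTHER = 0, 1, 2

def snippet_to_grid(lines, height=50, width=80):
    """Map a source snippet to a (height, width) grid of token-class
    ids, an image-like input a 2-D CNN can classify."""
    grid = np.full((height, width), OTHER, dtype=np.int64)
    for i, line in enumerate(lines[:height]):
        for j, ch in enumerate(line[:width]):
            if ch.isdigit():
                grid[i, j] = LITERAL
            elif ch.isalpha() or ch == '_':
                grid[i, j] = IDENT
    return grid   # one-hot encode per cell before feeding the CNN

grid = snippet_to_grid(["def f(x):", "    return x + 1"])
```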

Eiso Kant is the co-founder & CEO of source{d}.

10:30

LUNCH

10:50

Jörg Bornschein

Jörg Bornschein, CIFAR

Combining Directed & Undirected Generative Models

In this talk I will present a new method for training deep models for unsupervised and semi-supervised learning. The models consist of two neural networks with multiple layers of stochastic latent units. The first network supports fast approximate inference given some observed data. The other network is trained to approximately model the observed data using higher-level concepts and causes. The learning method is based on a new bound for the log-likelihood, and the trained models are automatically regularized to make the job of both networks as easy as possible.
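
The talk's bound is new, but it sits in the family of importance-weighted log-likelihood estimators, where the inference network q proposes latent samples and the generative network p scores them. A sketch of the generic estimator, numerically stabilised with log-sum-exp:

```python
import numpy as np

def iw_log_likelihood(log_p_joint, log_q):
    """Importance-weighted estimate of log p(x):
    log (1/K) sum_k p(x, z_k) / q(z_k | x), with z_k drawn from q.
    Inputs are (K,) arrays of log-densities for K latent samples."""
    log_w = log_p_joint - log_q
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))  # stable log-sum-exp
```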

Jorg Bornschein is a Global Scholar with the Canadian Institute for Advanced Research (CIFAR) and a postdoctoral researcher in Yoshua Bengio's machine learning lab at the University of Montreal. He is currently concentrating on unsupervised and semi-supervised learning using deep architectures. Before moving to Montreal, Jorg obtained his PhD from the University of Frankfurt, working on large-scale Bayesian inference for non-linear sparse coding with a focus on building maintainable and massively parallel implementations for HPC clusters. Jorg was also chair and one of the founders of the German hackerspace "Das Labor", which was awarded in 2005 by the federal government for promoting STEM programs to prospective students.

11:10

Marie-Francine Moens

Marie-Francine Moens, KU Leuven

Learning Representations for Language Understanding: Experiences from the MUSE Project

Machine Understanding of Language: How Can a Machine Learn?

The lecture presents the main findings of the EU MUSE project, which has focused on natural language understanding. MUSE automatically translates the content of children's stories into events happening in a virtual world. We discuss the importance of world and common-sense knowledge in language understanding. The lecture then describes machine learning methods for acquiring this knowledge, including acquisition from other textual and visual data. We conclude with a demo of language understanding.

Marie-Francine Moens is a professor at the department of Computer Science of KU Leuven, where she heads the Language Intelligence and Information Retrieval group (http://www.cs.kuleuven.be/groups/liir/). She is author of more than 280 international peer reviewed publications and of several books. She is involved in the organization or program committee (as program chair, area chair or reviewer) of major conferences on computational linguistics, information retrieval and machine learning. In 2011 and 2012 she was appointed as chair of the European Chapter of the Association for Computational Linguistics (EACL). She is the scientific manager of the EU COST action iV&L (The European Network on Integrating Vision and Language). She was appointed as Scottish Informatics and Computer Science Alliance (SICSA) Distinguished Visiting Fellow in 2014.

11:30

END OF SUMMIT

11:50

REGISTRATION & LIGHT BREAKFAST

12:10

WELCOME

STARTUP SESSION

DEEP LEARNING APPLICATIONS

13:10

COFFEE

13:30

John Overington

John Overington, Benevolent.ai

Artificial Intelligence in Drug Discovery

AI is Changing the Drug Discovery Paradigm

Drug discovery is a challenging business: despite the huge societal and commercial benefits of new drugs, it is incredibly difficult to discover and develop new therapies, with typically only around 30 new drugs developed per year from the entire worldwide pharma and biotech R&D budget. The reasons for this are complex, but the bottom line is that the vast majority of projects that are started do not successfully finish; there is huge attrition from a scientist's initial idea through the discovery and clinical development stages. We are developing powerful, real-world evidence-based artificial intelligence solutions to address drug discovery. Key to recent progress is the availability of large quantities of data, high-performance computing, and developments in deep learning approaches to mine for hypotheses that can be rationally scored and prioritised for success.

John studied Chemistry at Bath, graduating in 1987. He then studied for a PhD at Birkbeck College on protein modelling, followed by a postdoc at ICRF (now CRUK). John then joined Pfizer, eventually leading a multidisciplinary group combining rational drug design, informatics and structural biology. In 2000 he moved to a start-up biotech company, Inpharmatica, where he developed the drug discovery database StARLite. In 2008 John moved to the EMBL-EBI, where the successor resource is known as ChEMBL. Most recently John joined Benevolent.ai, where he continues his research as director of bioinformatics. In this role, John is involved in integrating deep learning and other AI approaches into drug target validation and drug optimisation.

13:50

Rodolfo Rosini

Rodolfo Rosini, Weave.ai

The Last Mile of AI

The current batch of AI startups is being driven by mobile technology, even the ones outside the mobile industry, but how is that sustainable in the long term? Where is the growth? How is it possible to compete against incumbents, and which areas should be avoided? Rodolfo will introduce an investment thesis framework for evaluating AI startups from both an investor's and an entrepreneur's perspective. Whether you are considering starting an AI company, funding one, or expanding an existing business in that direction, this framework will help you evaluate opportunities agnostically, regardless of the vertical market.

Rodolfo is the co-founder and co-CEO of Weave.ai, and was previously founder of Storybricks. His mission is to use AI to improve the mobile user experience for the enterprise market. Rodolfo is a serial entrepreneur, having taken multiple startups from inception to market, recruited their management teams, raised VC funding for each, and successfully sold one. His background is in information security; he entered the AI market by building large-scale simulations for computer games.

Jason Cassidy

Jason Cassidy, Sightline Innovation

The Commercialisation of Deep Learning

Jason is an MD who left medical practice to drive the scientific effort at Sightline and the application of machine learning to microbiology. He was the driving force behind the adaptation of Sightline's manufacturing products for nano-sensing and biosecurity, and his background as a physician is also helping shape future applications in medical diagnostics.

14:30

David Plans

David Plans, BioBeats

Machine Intelligence for the Essential Self

At BioBeats, we're working on projects with AXA, Microsoft and BUPA that help people be well, fight stress, and be more productive. In most of these projects, deep learning approaches are taken to train models that can classify, predict and illuminate behaviour from the person's body and actions. Most of our classifiers learn from smartphone sensors, but increasingly our algorithms ingest from wearable sensors such as the Microsoft Band, Apple Watch, and upcoming projects from Google and Samsung. Our approach to building machine-learning-driven applications learns from evidence-based psychosocial intervention practices in mental health, but embodies continuous cardiovascular, skin, and movement-based sensor data in order to arrive at profound but granular insight for the individual, and their care or employer circle.

Dr David Plans is a member of the University of Surrey’s Center for Digital Economy and Center for Vision, Speech and Signal Processing, and is working towards machine learning solutions to foster human wellbeing. His primary research focus is adaptive media and affective modelling. Having worked on early mHealth projects in the NHS, he is now leading smartphone and wearable research projects at BUPA, AXA/PPP, and Microsoft Health with his startup, BioBeats, where they are helping actuarial and care provision teams think differently about preventative health.

Ekaterina Volkova-Volkmar

Ekaterina Volkova-Volkmar, Bupa

Deep Learning for Digital Health

Bupa’s purpose is longer, healthier, happier lives. Digital interventions are a key element in delivering on our promise, with their potential to have positive impact on millions of lives. Personalised approaches towards behaviour change can be more effective and deliver a better user experience. Central to a personalised program is delivering interventions at the right time for each user. As such, using time series data to analyse and predict human behaviour is a natural choice.

We discuss the opportunities and challenges in applying deep learning techniques to time-based behaviour data to improve the effectiveness of digital health interventions.
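
As a minimal illustration of the kind of preprocessing such work typically involves (an assumption on our part, not a description of Bupa's pipeline), a behaviour time series is usually sliced into fixed-width input windows with next-step targets before any predictive model is fit:

```python
import numpy as np

def make_windows(series, width=24, horizon=1):
    """Slice a behaviour time series (e.g. hourly activity counts)
    into fixed-width inputs and next-step targets."""
    X, y = [], []
    for t in range(len(series) - width - horizon + 1):
        X.append(series[t:t + width])
        y.append(series[t + width + horizon - 1])
    return np.array(X), np.array(y)

X, y = make_windows(np.random.rand(24 * 14))   # two weeks, hourly
```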

Ekaterina Volkova-Volkmar is a researcher at Bupa, London, UK. She finished her PhD at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, in 2014. With research background in neuroscience, computer science, and computational linguistics, Ekaterina is interested in integrating deep learning methods into digital solutions for behaviour change. Her current focus is on developing intelligent digital coaching services to help people improve their lifestyles and prevent diseases. More broadly, her research aims to bring human-computer interaction to a new level of naturalness and utility by using adaptable and context-aware approaches to the analysis of human behaviour.

15:10

Alex Matei

Alex Matei, Nuffield Health

Deep Learning for Digital Health

Alex is a Digital Health Manager at Nuffield Health. With an academic background in Software Engineering at University College London, he is now working on embedding machine learning into health prevention and wellbeing services. Within Nuffield, he advocates personalisation and tailoring across the customer journey. To improve health outcomes, Alex is investigating how behaviour change techniques can be amplified through artificial intelligence.

15:30

Marius Cobzarenco

Marius Cobzarenco , re:infer

Building Conversational Interfaces with Deep Nets

Learning Semantic Representations for Chat

Businesses are increasingly talking to their customers using instant messaging and social media. These conversations contain valuable insight into the underlying causes of user behaviour. However, this channel is significantly underused because of the challenges posed by analysing informal, poorly spelt, user-generated text. Traditional natural language processing is based on rules and hand-engineered features, which makes it too inflexible. Onboarding new languages requires human expert knowledge, making it prohibitively expensive.

In this talk, I will describe the approach we developed, based on unsupervised training of deep neural networks that map sentences to a fixed-size semantic representation. The model reads language character by character, which is important for understanding user-generated content and recognising out-of-vocabulary terms. We train these language models on billions of sentences in an unsupervised fashion (no need for data annotated by humans). The semantic representations learnt are invariant to rephrasing as long as the meaning is unchanged. These language models form the basis of our chat analytics and automation products.
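
A minimal sketch of the character-level input such models consume: each sentence becomes a fixed-size one-hot matrix over a small alphabet, so misspellings and out-of-vocabulary words still yield valid input. The alphabet and maximum length here are assumptions for illustration, not re:infer's actual configuration.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz0123456789 .,!?'"   # assumed

def encode_chars(sentence, max_len=140):
    """One-hot encode a sentence character by character; misspelt or
    out-of-vocabulary words still produce valid input."""
    x = np.zeros((max_len, len(ALPHABET)), dtype=np.float32)
    for i, ch in enumerate(sentence.lower()[:max_len]):
        j = ALPHABET.find(ch)
        if j >= 0:
            x[i, j] = 1.0
    return x    # consumed by a character-level sentence encoder

x = encode_chars("thx 4 ur help!!")
```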

I believe artificial intelligence will improve most aspects of our lives in the next decade; AI is already "eating the world" today. In particular, I am interested in how emerging technologies such as deep learning can be used to build frictionless natural language interfaces. To this end I co-founded re:infer. I'm an old-fashioned hacker with a strong understanding of machine learning and its proxy fields: probability theory, statistical modelling, linear algebra and multivariate calculus. Academically, my interests are in probabilistic generative models of language.

Alison B Lowndes

Alison B Lowndes, NVIDIA

Compère

Davide Morelli

Davide Morelli, BioBeats

Machine Intelligence for the Essential Self

At BioBeats, we're working on projects that help people be well, fight stress, and be more productive. In most of these projects, deep learning approaches are taken to train models that can classify, predict and illuminate behaviour from the person's body and actions. Most of our classifiers learn from smartphone sensors, but increasingly our algorithms ingest from wearable sensors such as the Microsoft Band, Apple Watch, and upcoming projects from Google and Samsung. Our approach to building machine-learning-driven applications learns from evidence-based psychosocial intervention practices in mental health, but embodies continuous cardiovascular, skin, and movement-based sensor data in order to arrive at profound but granular insight for the individual, and their care or employer circle.

A researcher and entrepreneur, Davide leads BioBeats' engineering team as CTO. He is a specialist in the intersection between Artificial Intelligence and music, and previously ran a distributed software consultancy company in Italy for ten years. His PhD in Computer Science focuses on models that discover latent variables in performance profiles.

16:30

Alejandro Jaimes

Alejandro Jaimes, Acesio

Machine Learning for Medication Adherence

Alejandro (Alex) Jaimes is CTO & Chief Scientist at Acesio. Acesio focuses on Big Data for predictive analytics in healthcare to tackle disease at worldwide scale, impacting individuals and entire populations. The company uses Artificial Intelligence to collect and analyze vast quantities of data to track and predict disease in ways that have never been done before, leveraging environmental variables, population movements, sensor data, and the web. Prior to joining Acesio, Alex was CTO at AiCure, and before that he was Director of Research/Video Product at Yahoo, where he led research and contributions to Yahoo's video products, managing teams of scientists and engineers in New York City, Sunnyvale, Bangalore, and Barcelona. His work focuses on Machine Learning, mixing qualitative and quantitative methods to gain insights on user behavior for product innovation. He has published widely in top-tier conferences (KDD, WWW, RecSys, CVPR, ACM Multimedia, etc.), has been a visiting professor (KAIST), and is a frequent speaker at international academic and industry events. He is a scientist and innovator with 15+ years of international experience in research leading to product impact (Yahoo, KAIST, Telefonica, IDIAP-EPFL, Fuji Xerox, IBM, Siemens, and AT&T Bell Labs). He has worked in the USA, Japan, Chile, Switzerland, Spain, and South Korea, and holds a Ph.D. from Columbia University.

Day 2
11:30

INVESTING IN AI

INVESTING IN AI, with Playfair Capital, Tractable, Octopus Investments and White Star Capital

BREAKOUT PANEL SESSION - Bishopsgate Room 1

The breakout panel session will explore the opportunities and challenges of investing and receiving investment for startups within rapidly advancing technology such as Artificial Intelligence. Moderated by Sally Davies from FT, panelists include Nathan Benaich, Playfair Capital; John Henderson, White Star Capital; Simon King, Octopus Investments and Alex Dalyac, Tractable.

Day 2
12:00

SPEED-MENTORING SESSION

SPEED-MENTORING SESSION, with Qualcomm Ventures, Seedrs, Capital Enterprise, L Marks, Frontline Ventures and more

BREAKOUT SESSION: Pre-selected participants only

Pre-selected startups will have 10 minutes with up to six leading investors and advisors during this Speed-Mentoring session. Confirmed participants include PlayFair Capital, Capital Enterprise, Qualcomm Ventures, L Marks, Seedrs, White Star Capital, Octopus Investments, Frontline Ventures, Innovate UK, AngelLab, Accel Partners and more.

Applications are now closed.
