REGISTRATION & LIGHT BREAKFAST
THEORY & LANDSCAPE
Fabrizio Silvestri - Facebook
Embeddings in the Real World: Two Case Studies
We present two novel embedding mechanisms designed for two particular search applications: spell checking and query processing. Both embeddings aim to improve the quality of the underlying services under the constraint of not increasing the computational time needed to process queries. We show how we tackled this problem and present preliminary results on test datasets.
Fabrizio Silvestri is a Software Engineer at Facebook London on the Search Systems team. His interests are in web search in general; in particular, he specializes in building systems to better interpret queries from search users. Prior to Facebook, Fabrizio was a principal scientist at Yahoo, where he worked on sponsored search and native ads within the Gemini project. Fabrizio holds a Ph.D. in Computer Science from the University of Pisa, Italy, where he studied problems related to web information retrieval, with particular focus on efficiency-related problems such as caching, collection partitioning, and distributed IR in general.
Agata Lapedriza - Universitat Oberta de Catalunya & MIT Media Lab
Emotion recognition from images
Over the past decade we have observed increasing interest in developing technologies for automatic emotion recognition. The automatic recognition of emotions has many applications in environments where machines need to collaborate with humans. Most research on emotion recognition from images has focused on faces, and commercial software for emotion recognition from facial expressions is already available. In this talk, I will discuss the importance of analyzing scenes, in addition to faces, in order to better recognize emotions, and will motivate how emotion recognition can be approached from a scene understanding perspective.
Agata Lapedriza is a Professor at the Universitat Oberta de Catalunya. She received her MS degree in Mathematics from the Universitat de Barcelona and her Ph.D. degree in Computer Science from the Computer Vision Center at the Universitat Autònoma de Barcelona. She worked as a visiting researcher at the Computer Science and Artificial Intelligence Lab at the Massachusetts Institute of Technology (MIT) from 2012 until 2015. Currently she is also a visiting researcher in the Affective Computing group at the MIT Media Lab. Her research interests are related to image understanding, scene recognition and characterization, and affective computing.
Raia Hadsell - DeepMind
Deep Reinforcement Learning in Complex Environments
Where am I, and where am I going, and where have I been before? Answering these questions requires cognitive navigation skills--fundamental skills which are employed by every intelligent biological species to find food, evade predators, and return home. Mammalian species, in particular, solve navigation tasks through integration of several core cognitive abilities: spatial representation, memory, and planning and control. I will present current research which demonstrates how artificial agents can learn to solve navigation tasks through end-to-end deep reinforcement learning algorithms which are inspired by biological models. Further, I will show how these agents can learn to traverse entire cities by using Google Street View, without ever using a map.
Raia Hadsell, a senior research scientist at DeepMind, has worked on deep learning and robotics problems for over 10 years. Her thesis on Vision for Mobile Robots won the Best Dissertation award from New York University, and was followed by a post-doc at Carnegie Mellon's Robotics Institute. Raia then worked as a senior scientist and tech manager at SRI International. Raia joined DeepMind in 2014, where she leads a research team studying robot navigation and lifelong learning.
James Parr - NASA Frontier Development Lab
AI for space exploration: the first four years of FDL with NASA and ESA.
Over the past four years, NASA's Frontier Development Lab has learnt how to apply a broad AI toolbox to challenges in space exploration for the benefit of humanity, such as defending our planet from asteroids, comets and space weather, improving autonomous robotic exploration of the Moon, and broadening our ability to detect exoplanets and understand extraterrestrial biospheres. In 2018, FDL also ran in Europe in association with ESA, applying AI to terrestrial challenges in disaster response and mapping informal settlements. In this talk, the Director of FDL will discuss the patterns that are emerging in AI application and the potential on the horizon.
James is the founder and CEO of Trillium Technologies, a technology contractor that specialises in the application of emerging technologies to grand challenges such as climate change, violent extremism, prevention strategies for cancer and obesity, deforestation mitigation, climate resilience and planetary defence from asteroids.
He is Director of NASA’s Frontier Development Lab (FDL) an AI research accelerator based in Silicon Valley and FDL Europe, in partnership with ESA. He is also founder of the Open Space Agency (OSA) - which is dedicated to democratisation of space exploration through citizen science and open hardware.
He lives in London with his wife and twin daughters.
MULTIMODAL INFORMATION PROCESSING
Qiang Huang - University of Surrey
Synthesis of Images by Two-Stage Generative Adversarial Networks
We propose a divide-and-conquer approach using two generative adversarial networks (GANs) to explore how a machine can draw color pictures of birds using a small amount of training data. In our work, we simulate the procedure of an artist drawing a picture, who begins by drawing an object's contours and edges and then paints them in different colors. We adopt two GAN models to process basic visual features including shape, texture and color. We use the first GAN model to generate the object's shape, and then paint the resulting black-and-white image using the knowledge learned by the second GAN model. We ran our experiments on 600 color images. The experimental results show that our approach can generate good-quality synthetic images, comparable to real ones.
Dr. Qiang Huang is a senior researcher in the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey. Over the last twelve years, he has worked in several fields, including speech recognition, speech understanding, natural language processing, information retrieval, and audio/vision processing for sports video analysis, and has developed several systems for intelligent call routing, interactive information retrieval, tennis game analysis, and user-based dialogue using audio, visual and text information. His research now focuses on multimodal information processing using deep neural networks.
SEARCH & REASONING
Adam Kosiorek - University of Oxford & DeepMind
Attention Mechanisms: Knowing Where to Look Improves Visual Reasoning
The human visual cortex uses attention mechanisms to discard irrelevant information as well as to efficiently allocate computational resources. It has inspired modern machine learning, where attention mechanisms are a vital part of memory modules and are used for modelling object interaction as well as solving complex reasoning tasks. In this talk, we explore attention mechanisms for visual tasks and show how they can help to track objects in real-world videos when used in a recurrent framework with a hierarchy of attention mechanisms. Attention is also able to model common assumptions we make about objects - that they do not appear out of nowhere and do not disappear into thin air. This insight lends itself to unsupervised detection and tracking of multiple objects - without any human supervision.
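As a concrete illustration of the mechanism the abstract describes, here is a minimal soft (dot-product) attention step in plain Python. The function and the toy vectors are invented for this sketch and are not the speaker's code:

```python
import math

def soft_attention(query, keys, values):
    """One soft (dot-product) attention step over plain Python lists:
    score each key against the query, softmax the scores, and return
    the weighted sum of the values plus the attention weights."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    # Softmax with max-subtraction for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    output = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]
    return output, weights

# Toy example: the query matches the first key, so the first value dominates.
query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]
values = [[10.0, 0.0], [0.0, 10.0]]
out, w = soft_attention(query, keys, values)
```

Because the weights are a softmax, they form a probability distribution over the inputs: relevant items get high weight and the rest are effectively discarded, which is the resource-allocation behaviour the talk draws from the visual cortex.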
Adam R. Kosiorek is a PhD candidate at the University of Oxford. His research interests lie at the intersection of deep learning and machine reasoning, with the goal of achieving artificial general intelligence. Over the last five years Adam has worked on various applied and research machine learning projects at IBM, Samsung and Bloomberg, and he is now a research intern at Google DeepMind. In his free time, Adam reads lots of books, practices gymnastics and is a hiking enthusiast.
Ali Eslami - DeepMind
Neural Scene Representation and Rendering
Scene representation—the process of converting visual sensory data into concise descriptions—is a requirement for intelligent behavior. Recent work has shown that neural networks excel at this task when provided with large, labeled datasets. However, removing the reliance on human labeling remains an important open problem. To this end, we introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand the world around them.
Ali Eslami is a staff research scientist at DeepMind. His research is focused on getting computers to learn generative models of images that produce not only good samples but also good explanations for their observations. Prior to this, he was a post-doctoral researcher at Microsoft Research in Cambridge. He did his PhD in the School of Informatics at the University of Edinburgh, during which he was also a visiting researcher in the Visual Geometry Group at the University of Oxford.
NATURAL LANGUAGE PROCESSING
Angel Serrano - Santander UK
Benefits of Deep Learning in the Banking Industry
A description of use cases in the banking sector where deep learning techniques produced better results than ensemble techniques.
Angel currently runs the Data Science team at Santander UK. Previously he spent 13 years at PwC, where he built and led one of the data and analytics teams, providing analytics, data science and business intelligence solutions for financial services clients. His background also includes IT risk, architecture and data security management. Angel holds an MBA as well as CISA, CISM and CRISC certifications.
Lucia Specia - University of Sheffield
A Picture is Worth a Thousand Words: Towards Multimodal, Multilingual Context Models
In Computational Linguistics, work towards understanding or generating language has been based primarily on textual information. However, when we humans process a text, be it written or spoken, we also take into account cues from the context in which that text appears, in addition to our background and common-sense knowledge. This is also the case when we translate text. For example, a news article will often contain images and may also contain a short video and/or audio clip. Users of social media often post photos and videos accompanied by short textual descriptions. This additional information can help minimise ambiguities and resolve unknown words. In this talk I will introduce a recent area of research that addresses the automatic translation of texts using rich context models that incorporate multimodal information, focusing on visual cues from images. I will cover work analysing how humans perform translation in the presence/absence of visual cues and then move on to datasets and computational models -- based on deep learning -- that have been proposed for this problem. I will conclude by highlighting the opportunities and challenges that deep learning brings to this area.
Dr. Lucia Specia is Professor of Natural Language Processing at Imperial College London (since 2018) and the University of Sheffield (since 2012). Her research focuses on various aspects of data-driven approaches to language processing, with a particular interest in multimodal and multilingual context models and work at the intersection of language and vision. Her work has been applied to various tasks such as machine translation, image captioning, quality estimation and text adaptation. She is the recipient of the MultiMT ERC Starting Grant on Multimodal Machine Translation (2016-2021) and is currently involved in other funded research projects on machine translation (H2020 Bergamot, APE-QUEST), multilingual video captioning (British Council MMVC) and text adaptation (H2020 SIMPATICO). She was previously involved in 10+ funded research projects and has completed the supervision of 11 PhD students. In the past she worked as Senior Lecturer at the University of Wolverhampton, UK (2010-2011), and research engineer at the Xerox Research Centre, France (2008-2009). She received a PhD in Computer Science from the University of São Paulo, Brazil, in 2008. She has published 150+ research papers in peer-reviewed journals and conference proceedings.
DEEP LEARNING IN RETAIL
Tom Szumowski - Urban Outfitters
Automated fashion product attribution: A case study in using custom vision services vs. manually developed machine learning models
Many providers offer AI-based vision capabilities that perform common tasks like facial recognition, object detection, and text extraction; but for more specialized computer vision applications, companies have historically had to build their own machine learning models from scratch. Training, testing and deploying these models requires machine learning and software engineering expertise that many companies do not possess. In the last few years, custom vision services have become available that promise to democratize computer vision by automating the creation, evaluation, and deployment of these models. Google AutoML Vision, Microsoft Azure Custom Vision Service, Clarifai Custom Models, and IBM Watson Visual Recognition Custom Models all offer such services. In this talk, we will present a case study in using custom vision services to automate the attribution of fashion products, e.g. dress neckline, length, pattern. We will discuss the benefits and challenges associated with using custom vision services, and compare the performance of these services to our own custom-built models.
Tom Szumowski is a Data Scientist at URBN, a portfolio of global consumer brands comprising Urban Outfitters, Anthropologie, Free People, BHLDN, Terrain and the Vetri Family, with total annual sales over $3.5 billion. Tom's work applies machine learning algorithms to drive business value in a wide range of applications, from logistics optimization and fraud detection to product recommendations and personalization. Prior to joining URBN, Tom spent eleven years at Lockheed Martin, where he developed conventional and ML-based algorithms for various military applications. Tom holds a B.S. from Rutgers University and an M.S. from the University of Pennsylvania, both in electrical engineering.
Jian Li - Sky
Content Discovery with Semantic Flows
Recommendation systems are a powerful tool for increasing customer engagement and satisfaction in the media industry. Existing approaches rely on measuring similarity between content items and similarity between customers. In this talk, I will introduce Sky's patent-pending machine learning research, which looks at the recommendation problem from a different angle: what is the benefit of learning dissimilarity? In particular, I will introduce semantic flow, a brand-new approach to measuring semantic distance between concepts, and its capability to recommend exciting and surprising content to customers.
Jian Li is a principal data scientist and data science team manager at Sky. His team focuses on machine learning research for Sky's content discovery products and services, including search, recommendation, data supply and enrichment. The team's research covers various topics including deep learning, natural language processing, ranking, semantic knowledge inference and supply chain optimisation. Before joining Sky, Jian was with Microsoft Research in Cambridge, where he developed the personalised email classification product for Microsoft Exchange and Office 365 services; by the end of 2015, this product had been used by 50 million customers globally. Jian holds a Ph.D. in computer vision and experimental psychology from the University of Bristol.
ADDRESSING CHALLENGES IN DEEP LEARNING
Dominic Masters - Graphcore
Revisiting Small Batch Training for Deep Neural Networks
The size of the batches used for stochastic gradient descent (or its variants) is one of the principal hyperparameters that must be considered when training deep neural networks. Often very large batches are used to induce a large degree of parallelism and achieve higher throughput on today's hardware. But what is the cost to training performance? This work investigates the effect of batch size on training modern deep neural networks and shows that smaller batches improve the stability of training and achieve better test performance.
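The batch-size trade-off the abstract describes can be seen even on a toy problem. The sketch below (a hypothetical pure-Python example, not the paper's experiments) runs SGD on a one-dimensional least-squares objective with small and large batches:

```python
import random

def sgd(batch_size, steps=200, lr=0.1, seed=0):
    """Minimise the mean squared distance to a fixed set of data points
    with plain SGD, varying only the batch size."""
    rng = random.Random(seed)
    data = [rng.gauss(3.0, 1.0) for _ in range(256)]  # samples centred near 3
    w = 0.0
    for _ in range(steps):
        batch = rng.sample(data, batch_size)
        # Gradient of the batch-averaged loss 0.5 * (w - x)^2 is w - mean(batch).
        grad = sum(w - x for x in batch) / batch_size
        w -= lr * grad
    return w

w_small = sgd(batch_size=4)    # noisier per-step updates
w_large = sgd(batch_size=128)  # smoother, more parallel-friendly updates
# Both settings end up near the data mean (about 3.0) on this toy problem;
# the talk's question is how this trade-off plays out for deep networks.
```

Large batches let the hardware process more examples per step, but each step uses the same learning rate on a less noisy gradient; the work above argues that for deep networks the small-batch noise actually aids stability and generalisation.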
Dominic is a Research Engineer at Graphcore focusing on understanding and improving the fundamental learning algorithms used for deep neural network training. He did his undergraduate degree and Masters in Mathematics followed by a PhD applying optimization methods to aerodynamic design.
Taco Cohen - Qualcomm Research Netherlands
Accelerating algorithmic and hardware advancements for power efficient on-device AI
Artificial Intelligence (AI), specifically deep learning, is revolutionizing industries, products, and core capabilities by delivering dramatically enhanced experiences. However, the deep neural networks of today are growing quickly in size and use too much memory, compute, and energy. Plus, to make AI truly ubiquitous, it needs to run on the end device within a tight power and thermal budget. One approach to address these issues is Bayesian deep learning. This talk will discuss:
• Why AI algorithms and hardware need to be energy efficient
• How Bayesian deep learning is making neural networks more power efficient through model compression and quantization
• How we are doing fundamental research on AI algorithms and hardware to maximize power efficiency
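As a rough illustration of the model-compression idea in the bullets above, the sketch below applies naive uniform post-training quantization to a handful of weights. This is an invented toy, not Qualcomm's method; the Bayesian approaches discussed in the talk learn where precision can be sacrificed, which this sketch deliberately omits:

```python
def quantize(weights, bits):
    """Naive uniform post-training quantization: snap each float weight
    to the nearest of 2**bits evenly spaced levels across its range."""
    lo, hi = min(weights), max(weights)
    step = (hi - lo) / (2 ** bits - 1)
    return [lo + round((w - lo) / step) * step for w in weights]

w = [-0.31, -0.12, 0.04, 0.27, 0.55]
w8 = quantize(w, bits=8)  # fine grid: near-lossless
w2 = quantize(w, bits=2)  # only 4 levels: visibly coarser
err8 = max(abs(a - b) for a, b in zip(w, w8))
err2 = max(abs(a - b) for a, b in zip(w, w2))
# Fewer bits mean smaller models and cheaper arithmetic, at the price
# of larger rounding error -- the trade-off the learned methods manage.
```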
Taco Cohen is a machine learning research scientist at Qualcomm Research Netherlands and a PhD student at the University of Amsterdam, supervised by Prof. Max Welling. He was a co-founder of Scyfer, a successful deep learning services company acquired by Qualcomm in 2017. He holds a BSc in theoretical computer science from Utrecht University and an MSc in artificial intelligence from the University of Amsterdam (both cum laude). His research is focused on understanding and improving deep representation learning, in particular learning of equivariant and disentangled representations, data-efficient deep learning, learning on non-Euclidean domains, and applications of group representation theory and non-commutative harmonic analysis. He has done internships at Google DeepMind (working with Geoff Hinton) and OpenAI. He received the 2014 University of Amsterdam thesis prize and a Google PhD Fellowship.
Janahan Ramanan - Borealis AI
Event Prediction with Deep Learning
Sequence and event prediction has been one of the most directly applicable forms of machine learning in real-world environments. In financial services alone, event prediction has been applied to tasks ranging from predicting the markets to predicting customer actions. This talk will focus on methods and approaches for framing an event prediction task, and will highlight how to deal with tasks that have long sequence lengths and asynchronous events.
Janahan Ramanan is an Applied Research Lead at Borealis AI. His team focuses on sequence learning and event prediction in real-world scenarios. Janahan has a Master's in applied science from the University of Toronto. His previous experience includes working as a machine learning engineer at Google and co-founding the indoor location services company Smart Indoor.
CONVERSATION & DRINKS
Antonia Creswell - Imperial College London
Iterative Approach To Improving Sample Generation
Generative modelling allows us to synthesise new data samples, whether they are used for imagining new concepts, augmenting datasets or better understanding our datasets. When synthesising samples, generative models don't always get it right the first time: the samples may not be sharp, may have artefacts or may be nonsensical. I will present recent research that shows how we can improve these samples by applying an iterative procedure that yields slightly better samples with each step.
Antonia is a PhD candidate at Imperial College London, in the Bio-Inspired Computer Vision Group. Her research focuses on unsupervised learning and generative models. She received her masters in Biomedical Engineering from Imperial College London with an exchange year at the University of California, Davis. Antonia has interned at DeepMind, Twitter (Magic Pony), Cortexica and UNMADE.
Seena Rejal - 3D Industri.es
Shape Intelligence: The Missing Link in Spatial Computing
As spatial computing rapidly evolves and 3D data proliferates exponentially, a core missing component of the technology and application landscape has been the intelligent understanding of physical shapes and objects. While reconstruction, visualisation and detection are advancing in leaps and bounds, semantic recognition, matching and analysis of shapes remain work in progress. This talk will share the vision of 3D Industri.es in this domain and how, alongside its industry partners, the company is developing proprietary technologies and techniques to deliver powerful solutions across various sectors.
Seena Rejal is Founder and CEO of 3D Industri.es (3DI). His work has been featured in Forbes, Inc.com and the BBC and he has presented internationally at SXSW, WebSummit, CES, DLD and the Hanover Messe, amongst others.
Prior to this, Seena was active in the cleantech space, working with groundbreaking and revolutionary firms in the carbon capture and solar energy fields. Preceding that, he was head of technology for the acclaimed Clinton Climate Initiative (CCI), President Clinton’s programme on mitigating climate change. He worked closely with the global research and technology communities, and coordinated with policy and finance leaders to support accelerated deployment of the most high-impact solutions.
Seena holds Master's and Doctoral degrees from the University of Cambridge where he was supervised by Prof. Sir Mike Gregory.
Flora Tasse - Selerio
Semantic descriptors for 3D object understanding in AR
We present a method for jointly analyzing images, 3D objects and text, to generate a unified semantic descriptor that captures the shape and class of objects from color images. This creates an AI agent that predicts semantic and geometric scene data from the physical world as humans do. We show how such a system is used at Selerio to pull physical objects into the virtual world for Augmented Reality applications.
Flora is co-founder of the AR startup Selerio, and a recent Ph.D. graduate in Computer Vision at the University of Cambridge. Her publications in several top-tier venues cover topics at the intersection of Graphics, Vision, and NLP such as sketch-based modeling or joint analysis of images and text for 3D retrieval. She also holds the 2014 Google European Doctoral Fellowship in Computer Graphics for her work on retrieving 3D models using images and sketches. Selerio, a Cambridge spin-out, builds on this work to provide developers with live 3D reconstruction and editing of real scenes, for more engaging AR experiences. Selerio is backed by investors such as Entrepreneur First and Betaworks.
Kerem Sozugecer - DeepZen
The Future of Audiobooks
DeepZen focuses on using neural networks to produce audiobooks and voiceovers with humanlike speech quality while reducing the cost and production time.
Kerem is the CTO and co-founder of DeepZen, which specializes in building end-to-end AI solutions that generate emotional and expressive synthetic voice indistinguishable from human speech.
Kerem has been an entrepreneur for the majority of his career. He has extensive experience in machine learning and, more recently, artificial intelligence. He has also held management roles at corporations such as Oracle and GE.
He has a BS in Information Technology from Rensselaer Polytechnic Institute in Troy, New York and currently resides in London.
Kamel Nebhi - AIEVE - PECULIUM
Enhancing Cryptocurrency Forecasting by using Deep Learning Sentiment Analysis
Peculium is the first crypto-savings platform that combines traditional savings, blockchain technology, cryptocurrency, and artificial intelligence. The platform makes use of AIEVE Artificial Intelligence technology to forecast the market price of several cryptocurrencies and give real-time savings-portfolio advice.
In this context, AIEVE is based on cutting-edge NLP techniques to extract semantic meaning and sentiment from large volumes of unstructured text from multiple sources such as social media and RSS feeds. In this presentation, we will describe a Twitter sentiment analysis pipeline based on CNN and LSTM networks using fine-tuned word embeddings. We will show how these techniques help AIEVE predict the cryptocurrency market with a higher level of accuracy and so increase users' savings.
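To make the shape of such a pipeline concrete, here is a deliberately tiny, lexicon-based stand-in for the sentiment stage. The lexicon and its scores are invented for this sketch; the production system described above uses CNN and LSTM models over fine-tuned word embeddings rather than anything this simple:

```python
def sentiment_score(tweet, lexicon):
    """Toy stand-in for the sentiment stage: tokenize a tweet and average
    the per-word polarities found in a small hand-made lexicon."""
    tokens = tweet.lower().split()
    scores = [lexicon[t] for t in tokens if t in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

# Invented miniature polarity lexicon (values in [-1, 1]).
LEXICON = {"bullish": 0.9, "moon": 0.7, "crash": -0.9, "scam": -1.0, "hodl": 0.3}

pos = sentiment_score("BTC looking bullish to the moon", LEXICON)
neg = sentiment_score("another exchange scam and market crash", LEXICON)
```

A learned model replaces the fixed lexicon with embeddings and a neural scorer, but the pipeline shape is the same: raw tweets in, a polarity signal out, which downstream models can combine with price data for forecasting.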
Prior to joining AIEVE, Kamel worked as a Senior NLP Data Scientist at First Utility, developing AI technology to improve the customer experience. Kamel has also worked at Oxford University Press as a Lead Language Technologist for the Oxford English Dictionary project. During his Ph.D. at the University of Geneva in Switzerland, he worked on several topics including NLP, machine learning and the Semantic Web.
Chris Fregly - PipelineAI
End-to-End Continuous Machine Learning in Production with PipelineAI, Spark ML, TensorFlow AI, PyTorch, Kafka, TPUs, and GPUs
Traditional machine learning pipelines end with lifeless models sitting on disk in the research lab. These traditional models are typically trained on stale, offline, historical batch data. Static models and stale data are not sufficient to power today's modern, AI-first enterprises, which require continuous model training, continuous model optimizations, and lightning-fast model experiments directly in production. Through a series of open-source, hands-on demos and exercises, we will use PipelineAI to breathe life into these models using 4 new techniques that we've pioneered:
• Continuous Validation (V)
• Continuous Optimizing (O)
• Continuous Training (T)
• Continuous Explainability (E)
These continuous "VOTE" techniques have proven to maximize pipeline efficiency, minimize pipeline costs, and increase pipeline insight at every stage, from continuous model training (offline) to live model serving (online). Attendees will learn to create continuous machine learning pipelines in production with PipelineAI, TensorFlow, and Kafka.
Chris Fregly is Founder at PipelineAI, a real-time machine learning and artificial intelligence startup based in San Francisco. He is also an Apache Spark contributor, a Netflix open-source committer, founder of the Global Advanced Spark and TensorFlow Meetup, and author of the O'Reilly training and video series "High Performance TensorFlow in Production with Kubernetes and GPUs." Previously, Chris was a Distributed Systems Engineer at Netflix, a Data Solutions Engineer at Databricks, and a Founding Member and Principal Engineer at the IBM Spark Technology Center in San Francisco.
DEEP LEARNING APPLIED
Marc Huertas-Company - Observatoire de Paris
Exploring galaxy evolution with deep learning
We are living in an exciting epoch. Thanks to rapidly improving technology, astronomy is entering the big data era. New NASA/ESA surveys that will become available in 2-5 years will contain multi-wavelength images of billions of galaxies and spectra for many tens of millions (e.g. EUCLID, WFIRST). The increase in computing power has also enabled us to run hydrodynamic numerical simulations that incorporate our knowledge of physics in a cosmological context and produce large amounts of simulated data, spanning most of the Universe's life. Deep learning appears to be an unavoidable solution for analyzing the huge volume of data available to the community. But not only that: it also brings new opportunities to find new observables and tighten the link between theory and observations. In recent years our group has pioneered the use of deep learning techniques in astronomy. I will review some of our key results and their impact on our understanding of how galaxies form and evolve.
I am an associate professor at the Observatoire de Paris and Université Paris Diderot, and a recognized expert in the fields of galaxy morphology and massive galaxy formation. I defended my PhD in 2009, and after a short postdoctoral experience of less than a year I obtained a permanent position at the Paris Observatory, becoming the youngest scientist hired at the institution in the last 10 years. I was one of the first researchers to apply classical machine learning techniques to the classification of galaxy morphologies, during my PhD back in 2008. When I started working on this topic, the presence of such techniques in astronomy was marginal. Since 2013, I have been pioneering the use of deep learning techniques to improve our understanding of galaxy evolution.
Rob Otter - Barclaycard
Data Technologies & Applied ML - ML Pipeline
Rob's career to date has been within the investment banking technology sector. He has held various senior positions, working either for centralised IT organizations or directly for key business lines, and has held fellowship titles at his last two employers, Credit Suisse and Goldman Sachs. During his career he has acquired deep technical knowledge and expertise in areas ranging from performance development methods in software and hardware to, more recently, big data technologies and their application within the investment banking arena, specifically artificial intelligence and scientific computing.
Rob is currently a technical Managing Director at Barclays International where he holds the position of Global Head of the Data Technologies and Applied Machine Learning Team.
Cedric Archambeau - Amazon
Learning Representations for Hyperparameter Transfer Learning
Bayesian optimization (BO) is a model-based approach for gradient-free black-box function optimization, such as hyperparameter optimization, which is critical in deep learning. Typically, BO relies on conventional Gaussian process regression, whose algorithmic complexity is cubic in the number of evaluations. As a result, Gaussian process-based BO cannot leverage large numbers of past function evaluations, for example, to warm-start related BO runs. After a brief intro to BO and an overview of several use cases at Amazon, I will discuss a multi-task adaptive Bayesian linear regression model whose computational complexity is attractive (linear) in the number of function evaluations, and which is able to leverage information from related black-box functions through a shared deep neural net. Experimental results show that the neural net learns a representation suitable for warm-starting related BO runs, and that these runs can be accelerated when the target black-box function (e.g., validation loss) is learned together with other related signals (e.g., training loss). The proposed method was found to be at least one order of magnitude faster than competing neural-net-based methods recently published in the literature.
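The linear-in-evaluations property mentioned above can be illustrated with the simplest conjugate case: one-dimensional Bayesian linear regression with a Gaussian prior, sketched below in plain Python. This is illustrative only; the talk's model is multi-task and uses a shared neural-net feature map rather than the raw input:

```python
def bayes_linreg_1d(xs, ys, noise_var=0.25, prior_var=10.0):
    """Closed-form posterior for 1-D Bayesian linear regression
    y = w * x + noise, with a zero-mean Gaussian prior on w. The update
    accumulates two scalar sums, so the cost is linear in the number of
    observations -- unlike the cubic cost of GP regression."""
    precision = 1.0 / prior_var + sum(x * x for x in xs) / noise_var
    mean = (sum(x * y for x, y in zip(xs, ys)) / noise_var) / precision
    return mean, 1.0 / precision  # posterior mean and variance of w

# Hypothetical evaluations of a "black box" that behaves like y ~ 2x.
xs = [0.5, 1.0, 1.5, 2.0]
ys = [1.1, 1.9, 3.2, 4.1]
w_mean, w_var = bayes_linreg_1d(xs, ys)
# w_mean sits near 2, and w_var shrinks as evaluations accumulate.
```

In the multi-task setting described in the talk, a neural net maps hyperparameter configurations to features shared across related BO problems, and a Bayesian linear layer like this one sits on top, keeping inference cheap while transferring information between runs.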
Cedric is the science lead of Amazon Core AI, with teams in Berlin, Barcelona, Tuebingen, and Seattle. His work on democratizing machine learning enables teams at Amazon to deliver a wide range of machine learning-based products, including customer-facing services such as Amazon SageMaker (aws.amazon.com/sagemaker). Currently, he is interested in algorithms that learn representations, algorithms that learn to learn, and algorithms that avoid catastrophic forgetting (in deep learning). Prior to joining Amazon, he led the Machine Learning group at Xerox Research Centre Europe (now Naver Labs Europe), where his team conducted applied research in machine learning, computational statistics and mechanism design, with applications in customer care, transportation and governmental services. He joined Amazon, Berlin, as an Applied Science Manager in October 2013, where he was in charge of delivering zero-parameter machine learning algorithms.
DEEP LEARNING TO PREVENT RISK
Eli David - Deep Instinct
End-to-End Deep Learning for Detection, Prevention, and Classification of Cyber Attacks
With more than a million new malicious files created every single day, it is becoming exceedingly difficult for existing malware detection methods to catch most of these new, sophisticated attacks. In this talk, we describe how Deep Instinct uses an end-to-end deep learning-based approach to train its brain on hundreds of millions of files, thereby providing by far the highest detection and prevention rates in the cybersecurity industry today. We will additionally explain how deep learning is employed for malware classification and attribution of attacks to specific entities.
Dr. Eli David is a leading expert in the field of computational intelligence, specializing in deep learning (neural networks) and evolutionary computation. He has published more than thirty papers in leading artificial intelligence journals and conferences, mostly focusing on applications of deep learning and genetic algorithms in various real-world domains. For the past ten years, he has been teaching courses on deep learning and evolutionary computation, in addition to supervising the research of graduate students in these fields. He has also served in numerous capacities successfully designing, implementing, and leading deep learning-based projects in real-world environments. Dr. David is the developer of Falcon, a grandmaster-level chess-playing program based on genetic algorithms and deep learning. The program reached second place in the World Computer Speed Chess Championship. He received the Best Paper Award at the 2008 Genetic and Evolutionary Computation Conference, the Gold Award in the prestigious "Humies" Awards for Human-Competitive Results in 2014, and the Best Paper Award at the 2016 International Conference on Artificial Neural Networks. Currently Dr. David is the co-founder and CTO of Deep Instinct, the first company to apply deep learning to cybersecurity. Recently Deep Instinct was recognized by Nvidia as the "most disruptive AI startup".
ETHICS & REGULATION OF DEEP LEARNING IN PRACTICE
PANEL: Regulation And Global Policy - AI and Autonomous Systems
Jade Leung - Governance AI Program, Future of Humanity Institute
Jade is a researcher with the Governance of Artificial Intelligence Program (GovAI) at the Future of Humanity Institute (University of Oxford). Her research focuses on the governance of emerging dual-use technologies, with a specific focus on firm-government relations in the US and China with respect to advanced artificial intelligence. Jade has a background in engineering, international law, and policy design and evaluation.
Alison Hall - PHG Foundation/University of Cambridge
Alison leads the Humanities work at the PHG Foundation, a health policy think tank which is part of University of Cambridge. Her research focuses on the regulation and governance of genomic data for clinical care and research, the impact of automated processing and artificial intelligence on existing legal and ethical frameworks, and the challenges and opportunities associated with delivering personalised healthcare. Alison has professional qualifications in law and nursing and a masters qualification in healthcare ethics.
Andrea Renda - Centre for European Policy Studies
Andrea Renda is an Italian social scientist whose research lies at the crossroads between economics, law, technology and public policy. He is Senior Research Fellow and Head of Global Governance, Regulation, Innovation and the Digital Economy at the Centre for European Policy Studies (CEPS). Since September 2017 he has held the Chair for Digital Innovation at the College of Europe in Bruges (Belgium), where he has also been leading the course “Regulatory Impact Assessment for Business” since 2007. He is also a non-resident fellow at Duke University's Kenan Institute for Ethics. Over the past two decades, he has provided academic advice to several institutions, including the European Commission, the European Parliament, the OECD, the World Bank and several national governments around the world. An expert in technology policy and better regulation, he is a member of the ESIR (Economic and Social Impacts of Research) expert group of the European Commission and a member of the EU Blockchain Observatory and Forum. He is also a member of the Editorial Boards of the international peer-reviewed journals “Telecommunications Policy” (Elsevier) and the “European Journal of Risk Regulation” (Lexxion), a member of the Scientific Board of the International Telecommunications Society (ITS), and Chair of the Scientific Board of European Communications Policy Research (EuroCPR). He holds a Ph.D. degree in Law and Economics awarded by the Erasmus University of Rotterdam.
Matthew Fenech - Future Advocacy
Dr Matthew Fenech is an artificial intelligence policy consultant, with expertise in developing and advocating for policies that maximise the opportunities and minimise the risks of AI technologies. His main interest is in the ethics and practicalities of the use of AI in healthcare, a field to which he brings his 10 years of experience working as a hospital doctor and clinical academic. He has also authored reports on AI and other emerging technologies in low- and middle-income countries, and on the impact of automation on the future of work. He regularly speaks about these topics in lectures and in the media.
Loubna Bouarfa - OKRA Technologies
Loubna Bouarfa is a machine learning scientist turned entrepreneur.
In 2016, after several years in academia, Loubna founded her own artificial intelligence company: OKRA Technologies. OKRA is a data analysis platform, using deep machine learning algorithms to transform complex datasets into evidence-based predictions, in real time. The platform was designed to equip Healthcare and Life Sciences professionals with the foresight to improve patient outcomes.
Before OKRA, Loubna spent over 10 years validating and implementing machine learning (ML) solutions for real-world applications, such as an autonomous ML system that tracks surgeons’ operating movements and prevents error in real time.
Loubna has won several awards and was recognised as a leading innovator by the MIT Technology Review in 2017.
Beyond her business, Loubna has recently been appointed by the European Commission as a High-Level Expert on Artificial Intelligence. She will support the EU by developing recommendations on ethical, legal and societal issues related to AI, impacting the health, safety and freedom of the wider society.
On a personal level, Loubna is a strong advocate for diversity and challenging the status quo. Having lived in Morocco, moving to the Netherlands at the age of 17, and later to the UK with a young family, she realised the power of remaining outside her comfort zone.
PANEL: Ethically Handling Data - What is Your Responsibility and What Should be the Next Step?
Alice Piterova - Hazy
Alice reviews Hazy's product features for AI ethics, data privacy and compliance and helps define Hazy's core message to the world. Prior to joining Hazy Alice coordinated the cross-party parliamentary group on AI (APPG AI), helping the UK Government to address ethical implications and design new standards for applying machine learning in commercial, political and social areas.
Alice has over 10 years of experience in policy, research, product management and marketing, and a particular focus on such fields as artificial intelligence, big data and tech for good. Having worked in national and international public and private sector organisations, social enterprises and NGOs, Alice has a proven track record in delivering the strategic vision and showcasing impact to a wide range of stakeholders.
Aimee Van Wynsberghe - TU Delft
Aimee van Wynsberghe has been working in ICT and robotics since 2004. She began her career as part of a research team working with surgical robots in Canada at CSTAR (Canadian Surgical Technologies and Advanced Robotics). She is Assistant Professor in Ethics and Technology at TU Delft in the Netherlands. She is co-founder and co-director of the Foundation for Responsible Robotics, on the board of the Institute for Accountability in a Digital Age, and an advisory board member for the AI & Intelligent Automation Network. Aimee also serves as a member of the European Commission's High-Level Expert Group on AI and is a founding board member of the Netherlands AI Alliance. Aimee has been named one of the Netherlands' top 400 influential women under 38 by VIVA and one of the 25 ‘women in robotics you need to know about’. She is author of the book Healthcare Robots: Ethics, Design, and Implementation and has been awarded an NWO personal research grant to study how we can responsibly design service robots. She has been interviewed by the BBC, Quartz, the Financial Times, and other international news media on the topic of ethics and robots, and is often invited to speak at international conferences and summits.
Caryn Tan - Accenture
Caryn is an Analytics Strategist operating at the intersection of applied analytics and law/ethics.
She advises senior decision-makers on analytics strategy, target operating model and analytics business case and manages technical teams to operationalise and realise these strategies. She also manages Accenture’s Responsible AI practice in the UK where she helps clients confidently deploy responsible AI models with technical, organisational, governance and brand considerations. This involves working with multidisciplinary teams, industry experts and academic institutes.
Caryn graduated from London Business School and holds a law degree from BPP University, both as a merit scholar.
END OF SUMMIT
Cansu Canca & Laura Haaber Ihle - AI Ethics Lab
This is a 2-hour workshop with a maximum of 20 places, and registration is required. To register your interest, please complete the registration form here. You must already be registered for the summit in order to attend. An email will be sent to confirm your place.
When searching for “professor” or “CEO” on Google Images, the results show overwhelmingly white male pictures. While these jobs are more often held by white men, the image search results present an extreme bias against representing women and people of color. This has been pointed out as an ethical problem in various outlets; however, the problem persists.
In this workshop, we use this case as an example of how to structure the ethical problem at hand and its underlying principles before attempting to solve it. Through the game-like structure of the Mapping method, the workshop will engage participants and help them develop essential tools for deciding on ethical solutions that are technically feasible. Collaborating with each other, participants test the strength of their ideas and progress gradually towards creating solutions to this real-life problem, as well as analyzing how their solutions would hold up in other relevant cases, such as voice assistant responses and other search result categories. The Mapping helps bring abstract ethical arguments to the ground, in a very literal sense, since the Mapping takes the form of a physical ground game.
Explore with the Experts - ROUND TABLE DISCUSSION
Limited spaces are available for this session. Sit down with experts in AI who are using deep learning and AI techniques to solve challenges in the healthcare sector. Offer them your thoughts on the problems they're facing in their work, get feedback on your challenges, and ask those burning questions in this in-depth networking session.
Confirmed experts include:
• Danielle Belgrave, Machine Learning Researcher, Microsoft Research
• Sarah Culkin, Strategic Data Lead, NHS England
• Daniel Leightley, Post-Doctoral Research Associate, King's Centre for Military Health Research
Questions and challenges our experts are keen to explore with you include:
• How do we incorporate expert knowledge into deep learning?
• Who is responsible when the ‘machine’ makes an incorrect decision that costs lives?
• How do we build and develop trust in AI technology, especially within the healthcare sector?
• Machine learning for healthcare in data-sparse contexts
• What advanced technologies do you think will emerge for the health and care system over the next decade?
Amir Saffari - BenevolentAI
Graphs are a natural way to model many real-world complex objects. In this talk, after a brief review of recent advances in using deep learning for graphs, we will present our approach to creating inference models based on novel attention mechanisms for graph convolutional neural networks that make them robust to noise and add interpretability. We will show their applications to very large, semi-automatically generated biological networks. In addition, we will discuss our recent approaches to modelling and generating graphs with optimal properties using reinforcement learning, as well as an architecture for conditional generative graph models applied to creating novel chemical compounds.
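The core idea of attention on a graph can be sketched briefly. This is an illustrative single-head toy in the style of standard graph attention networks, not BenevolentAI's actual architecture (which is not described in the abstract): each node aggregates its neighbours' features weighted by learned attention scores, so noisy edges can be down-weighted, and the attention coefficients themselves offer some interpretability. All parameter names and sizes here are assumptions for the sketch.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def graph_attention_layer(H, A, W, a):
    """One attention-weighted neighbourhood aggregation.

    H: node features (N x F), A: adjacency with self-loops (N x N),
    W: feature projection (F x F'), a: attention vector (2*F',).
    """
    Z = H @ W                                # project node features
    out = np.zeros_like(Z)
    lrelu = lambda s: np.where(s > 0, s, 0.2 * s)
    for i in range(H.shape[0]):
        nbrs = np.where(A[i] > 0)[0]         # neighbours of node i
        # score each neighbour from the concatenated (node, neighbour) pair
        scores = np.array([lrelu(a @ np.concatenate([Z[i], Z[j]]))
                           for j in nbrs])
        alpha = softmax(scores)              # attention coefficients sum to 1
        out[i] = (alpha[:, None] * Z[nbrs]).sum(axis=0)
    return out

rng = np.random.default_rng(1)
A = np.array([[1, 1, 0],                     # tiny 3-node graph
              [1, 1, 1],
              [0, 1, 1]])
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 4))
a = rng.normal(size=8)
H_next = graph_attention_layer(H, A, W, a)
```

Inspecting the `alpha` coefficients per node is what gives this family of models its interpretability: an edge that the model has learned to distrust receives a weight near zero.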
Caryn Tan - Accenture
How to Design AI with Human Centricity
Whether we like it or not, we are inevitably moving into a world of more autonomous decisions by AI. In fact, done right it could lead to a really exciting future.
Done right is the key.
How do we ensure that we build AI that does not marginalise groups within society?
How do we prevent negative unintended consequences such as creating larger social disparity?
This workshop will get execs and data scientists thinking about how to ensure their AI is designed with human centricity in mind. Open to everyone, both technical and non-technical.
Investing in Startups: A Founder’s Story and Meet the Investors - NETWORKING
Parker Moss, Entrepreneur-in-Residence at F-Prime Capital, will be chairing this exciting session exploring investing in startups. We will start off the session with a Founder's Story interview with Eduardo Jorgensen, Founder of MedicSen, a healthcare startup that has seen significant growth over the past year since launching GlycSen for the intelligent treatment of insulin-dependent diabetes. Following this, leading investors interested in deep learning across all industries will share industry insights and tips in a panel discussion.
The investors are:
Dmitry Kaminskiy, Managing Partner, Deep Knowledge Ventures
John Spindler, General Partner, AI Seed
Frederic Lardieg, Partner, Octopus Ventures
Attendees will be invited to meet the investors for the remainder of the session to ask those burning questions.