REGISTRATION & LIGHT BREAKFAST
DEEP LEARNING LANDSCAPE
Tackling Complex Environments & Deep Learning
Oriol Vinyals - DeepMind
AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
Games have been used for decades as an important way to test and evaluate the performance of artificial intelligence systems. As capabilities have increased, the research community has sought games with increasing complexity that capture different elements of intelligence required to solve scientific and real-world problems. In recent years, StarCraft, considered to be one of the most challenging Real-Time Strategy (RTS) games and one of the longest-played esports of all time, has emerged by consensus as a “grand challenge” for AI research.
In this talk, I will introduce our StarCraft II program AlphaStar, the first Artificial Intelligence to defeat a top professional player. In a series of test matches held on 19 December, AlphaStar decisively beat Team Liquid’s Grzegorz "MaNa" Komincz, one of the world’s strongest professional StarCraft players, 5-0, following a successful benchmark match against his team-mate Dario “TLO” Wünsch. The matches took place under professional match conditions on a competitive ladder map and without any game restrictions.
Oriol Vinyals is a Research Scientist at DeepMind. Previously he was a member of the Google Brain team. He holds a Ph.D. in EECS from the University of California, Berkeley, and a Master’s degree from the University of California, San Diego.
Jakob Uszkoreit - Google Brain
Learning Representations with Self-Attention
Self-attention has been shown to be an efficient way of learning representations of variable-sized data such as language, images and music, competitive in quality with recurrent and convolutional neural networks. This talk will cover the basic mechanism and various extensions, the interpretation of results from different applications, and an outlook on future research in this direction.
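As background for the basic mechanism mentioned above, here is a minimal NumPy sketch of single-head, unmasked scaled dot-product self-attention. The matrix names and sizes are illustrative and not taken from the talk.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (n, d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise compatibilities
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ V                       # context-weighted mix of values

rng = np.random.default_rng(0)
n, d = 5, 8  # sequence length and model width, chosen for illustration
X = rng.normal(size=(n, d))
W = [rng.normal(size=(d, d)) for _ in range(3)]
out = self_attention(X, *W)
print(out.shape)  # (5, 8): one context-mixed vector per input position
```

Because every position attends to every other in one step, the same code handles sequences of any length, which is part of why the approach transfers across language, images and music.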
Jakob Uszkoreit leads the new Google Brain research lab in Berlin. There he works on neural network architectures for generating text, images and other modalities in tasks such as machine translation or image generation. Earlier, Jakob led a team in Google Research developing neural network models of language that learn from weak supervision at very large scale, deployed in Search, Ads and the Google Assistant. Before this, Jakob started the group that designed and implemented the semantic parser behind the Google Assistant after working on various aspects of Google Translate in its earlier years.
Ali Eslami - DeepMind
Neural Scene Representation and Rendering
Scene representation—the process of converting visual sensory data into concise descriptions—is a requirement for intelligent behavior. Recent work has shown that neural networks excel at this task when provided with large, labeled datasets. However, removing the reliance on human labeling remains an important open problem. To this end, we introduce the Generative Query Network (GQN), a framework within which machines learn to represent scenes using only their own sensors. The GQN takes as input images of a scene taken from different viewpoints, constructs an internal representation, and uses this representation to predict the appearance of that scene from previously unobserved viewpoints. The GQN demonstrates representation learning without human labels or domain knowledge, paving the way toward machines that autonomously learn to understand the world around them.
Ali Eslami is a staff research scientist at DeepMind. His research is focused on getting computers to learn generative models of images that not only produce good samples but also good explanations for their observations. Prior to this, he was a post-doctoral researcher at Microsoft Research in Cambridge. He did his PhD in the School of Informatics at the University of Edinburgh, during which he was also a visiting researcher in the Visual Geometry Group at the University of Oxford.
Richard Turner - University of Cambridge
Dr. Richard E. Turner is a Reader in Machine Learning at the University of Cambridge and a Visiting Researcher at Microsoft Research Cambridge. His research fuses probabilistic machine learning and deep learning to develop robust, data-efficient, flexible and automated learning systems. Richard helps lead Cambridge’s renowned Machine Learning Group, the Machine Learning and Machine Intelligence MPhil, the Centre for Doctoral Training in AI for Environmental Risk, and the Cambridge Big Data Strategic Initiative. He studied for his PhD at the Gatsby Computational Neuroscience Unit at UCL and held a postdoctoral fellowship at New York University in the Laboratory for Computational Vision. He has been awarded the Cambridge Students' Union Teaching Award for Lecturing and his work has featured on BBC Radio 5 Live’s The Naked Scientist, BBC World Service’s Click and in Wired Magazine.
Pierre-Yves Oudeyer - Inria
Developmental autonomous learning: AI, cognitive sciences and educational technology
Current approaches to AI and machine learning are still fundamentally limited in comparison with the autonomous learning capabilities of children. What is remarkable is not that some children become world champions in certain games or specialties: it is rather their autonomy, flexibility and efficiency at learning many everyday skills under severely limited resources of time, computation and energy. And they do not need the intervention of an engineer for each new task (e.g. they do not need someone to provide a new task-specific reward function).
I will present a research program that has focused on computational modeling of child development and learning mechanisms over the last decade. I will discuss several developmental forces that guide exploration in large real-world spaces, starting from the perspective of how algorithmic models can help us better understand how they work in humans, and in return how this opens new approaches to autonomous machine learning.
In particular, I will discuss models of curiosity-driven autonomous learning, enabling machines to sample and explore their own goals and their own learning strategies, self-organizing a learning curriculum without any external reward or supervision.
I will show how this has helped scientists better understand aspects of human development such as the emergence of developmental transitions between object manipulation, tool use and speech. I will also show how the use of real robotic platforms for evaluating these models has led to highly efficient unsupervised learning methods, enabling robots to discover and learn multiple skills in high dimensions in a handful of hours. I will discuss how these techniques are now being integrated with modern deep learning methods.
Finally, I will show how these models and techniques can be successfully applied in the domain of educational technologies, personalizing sequences of exercises for human learners while maximizing both learning efficiency and intrinsic motivation. I will illustrate this with a large-scale experiment recently performed in primary schools, enabling children of all levels to improve their skills and motivation in learning aspects of mathematics. Web: http://www.pyoudeyer.com
Pierre-Yves Oudeyer is a research director at Inria and has headed the FLOWERS lab at Inria and Ensta-ParisTech since 2008. Before that, he was a permanent researcher at the Sony Computer Science Laboratory for eight years (1999-2007).
He studies developmental autonomous learning and the self-organization of behavioural and cognitive structures, at the frontiers of AI, machine learning, neuroscience, developmental psychology and educational technologies. In particular, he studies exploration in large open-ended spaces, with a focus on autonomous goal setting, intrinsically motivated learning, and how this can automate curriculum learning. With his team, he pioneered curiosity-driven learning algorithms working in real-world robots (used in Sony Aibo robots), and showed how the same algorithms can be used to personalize sequences of learning activities in educational technologies deployed at scale in schools. He developed theoretical frameworks to better understand human curiosity and its role in cognitive development, and contributed to building an international interdisciplinary research community on human curiosity. He also studied how machines and humans can invent, learn and evolve speech communication systems.
He is a laureate of the Inria-National Academy of Science young researcher prize in computer science, of an ERC Starting Grant, and of the Lifetime Achievement Award of the Evolutionary Linguistics association. Beyond academic publications and several books, he is co-author of 11 international patents. His team created the first open-source 3D-printed humanoid robot for reproducible science and education (the Poppy project, now widely used in schools and artistic projects), as well as a startup company. He also works actively on communicating science to the general public, through popular-science articles, participation in radio and TV programs, and science exhibitions.
Katja Hofmann - Microsoft Research
Dr Katja Hofmann is a Senior Researcher at the Machine Intelligence and Perception group at Microsoft Research Cambridge. Her research focuses on reinforcement learning with applications in video games, as she believes that games will drive a transformation of how people interact with AI technology. She is the research lead of Project Malmo, which uses the popular game Minecraft as an experimentation platform for developing intelligent technology. Her long-term goal is to develop AI systems that learn to collaborate with people, to empower their users and help solve complex real-world problems.
Edward Grefenstette - Facebook AI Research (FAIR)
Teaching Artificial Agents to Understand Language by Modelling Reward
Recent progress in Deep Reinforcement Learning has shown that agents can be taught complex behaviour and solve difficult tasks, such as playing video games from pixel observations, or mastering the game of Go without observing human games, with relatively little prior information. Building on these successes, researchers such as Hermann and colleagues have sought to apply these methods to teach agents, in simulation, to complete a variety of tasks specified by combinatorially rich instruction languages. In this talk, we discuss some of these highlights and some of the limitations that inhibit the scalability of such approaches to more complex instruction languages (including natural language). Following this, we introduce a new approach, inspired by recent work in adversarial reward modelling, which constitutes a first step towards scaling instruction-conditional agent training to “real world” language.
Edward Grefenstette is a Research Scientist at Facebook AI Research, and Honorary Associate Professor at UCL. He previously was, in reverse order, a Staff Research Scientist at DeepMind, the CTO of Dark Blue Labs, and a Junior Research Fellow within Oxford’s Department of Computer Science and Somerville College. He completed his DPhil (PhD) at the University of Oxford in 2013 under the supervision of Profs Coecke and Pulman, and Dr Sadrzadeh, working on applying category-theoretic tools, initially developed to model quantum information flow, to the compositionality of distributed representations in natural language semantics. His recent research has covered topics at the intersection of deep learning and machine reasoning, addressing questions such as how neural networks can model or understand logic and mathematics, infer implicit or human-readable programs, or learn to understand instructions from simulation.
Jens Kober - TU Delft
Learning to Interact & Interacting to Learn
The acquisition and self-improvement of novel motor skills is among the most important problems in robotics. Reinforcement learning and imitation learning are two different but complementary machine learning approaches commonly used for learning motor skills.
I will discuss various learning techniques we developed that can handle complex interactions with the environment. Complexity arises from non-linear dynamics in general and contacts in particular, taking multiple reference frames into account, dealing with high-dimensional input data, interacting with humans, etc. A human teacher is always involved in the learning process, either directly (providing demonstrations) or indirectly (designing the optimization criterion), which raises the question: How to best make use of the interactions with the human teacher to render the learning process efficient and effective?
All these concepts will be illustrated with benchmark tasks and real robot experiments ranging from fun (ball-in-a-cup) to more applied (unscrewing light bulbs).
Jens Kober is an associate professor at the Cognitive Robotics department, TU Delft, The Netherlands. He worked as a postdoctoral scholar jointly at Bielefeld University, Germany and at the Honda Research Institute Europe, Germany. He received his PhD in 2012 from Technische Universität Darmstadt, Germany. From 2007 to 2012 he was working at the MPI for Intelligent Systems, Germany. He has been a visiting research student at the Advanced Telecommunication Research (ATR) Center, Japan and an intern at Disney Research Pittsburgh, USA. Jens is the recipient of the 2018 IEEE-RAS Early Academic Career Award in Robotics and Automation and the 2013 Georges Giralt PhD Award. Jens serves as co-chair of the IEEE-RAS TC Robot Learning.
Shreyansh Daftry - NASA JPL
Deep Learning for Space Exploration
Shreyansh Daftry is a Research Scientist at NASA Jet Propulsion Laboratory (JPL) in Pasadena, California, working at the intersection of Artificial Intelligence and Space Technology to help develop the next generation of robots for Earth, Mars and beyond. Shreyansh received his M.S. degree in Robotics from the Robotics Institute, Carnegie Mellon University, USA in 2016, and his B.S. degree in Electronics and Communications Engineering in 2013. His research interests span computer vision, machine learning and autonomous robotics, with a focus on real-time computation, safety and adaptability.
NATURAL LANGUAGE PROCESSING
Deep Learning for Gesture Recognition within AR Interfaces
Moving from Ensemble to Deep Learning in Natural Language Processing
Neural Network based Multimodal Dialog Technologies towards Human-Robot Communication
CONVERSATION & DRINKS
Using DL To Reverse the Resolution-Degrading Effects of Conventional Video Capture
Your Voice is Pure Gold: Understand Speech Data with Deep Learning
Gerben Oostra - BigData Republic
Predicting Effectiveness of Churn Prevention Measures
Losing customers, also referred to as churning, is something that any company wants to prevent. While it is interesting to know how likely a specific customer is to churn, it is more useful to know which countermeasure, such as a discount or an additional service, would prevent that customer from churning. Also, instead of knowing which countermeasure works well in general, we want to determine the best countermeasure for each individual customer.
Since we are interested in the causal relationship between countermeasure and churn, we need to go beyond analyzing correlations. For example, did the customer churn because they got the wrong countermeasure, or did they get the countermeasure because they were likely to churn anyway?
Furthermore, the feedback from previous countermeasures is limited to the set of countermeasures actually executed. This phenomenon, referred to as bandit feedback, provides no information about alternative, potentially more effective countermeasures. Because the model selects the applied countermeasure, our data only contains feedback about the predicted countermeasures. This data generation process causes strong selection bias, and introduces the extra challenge of balancing between exploiting predicted associations and exploring unknown ones.
In this talk, I will demonstrate how to obtain unbiased predictions of the effect of countermeasures, which can be used to select the optimal countermeasures while balancing exploitation and exploration. This will be done by combining deep learning and Bayesian modelling in a custom setup.
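The speaker's setup is his own, but the exploration/exploitation trade-off under bandit feedback described above can be illustrated with a classical Beta-Bernoulli Thompson sampling sketch. The countermeasure names and retention rates below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical retention probabilities per countermeasure (unknown to the learner).
true_retention = {"discount": 0.60, "extra_service": 0.70, "no_action": 0.50}
arms = list(true_retention)

# Beta(1, 1) priors: alpha counts retained customers, beta counts churned ones.
alpha = {a: 1.0 for a in arms}
beta = {a: 1.0 for a in arms}

for _ in range(5000):
    # Thompson sampling: draw a plausible retention rate per arm,
    # then act greedily on the draw. Uncertain arms still get explored.
    samples = {a: rng.beta(alpha[a], beta[a]) for a in arms}
    chosen = max(samples, key=samples.get)
    # Bandit feedback: we only observe the outcome of the chosen countermeasure.
    retained = rng.random() < true_retention[chosen]
    if retained:
        alpha[chosen] += 1
    else:
        beta[chosen] += 1

best = max(arms, key=lambda a: alpha[a] / (alpha[a] + beta[a]))
print(best)  # with enough interactions, the posterior concentrates on the best arm
```

Because each round records only the chosen arm's outcome, the posterior counts mirror the selection bias the abstract describes; sampling from the posterior rather than its mean is what keeps under-tried countermeasures in play.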
Gerben Oostra is a machine learning engineer who loves to solve complex problems. Having originally graduated on algorithmic approaches in operations research, he switched to data science challenges about 10 years ago. Since then he has been applying data science and data engineering in various companies, among which ING and Vodafone/Ziggo. He has gained data science experience on a variety of challenges, including social network analysis, time series forecasting, anomaly (fraud) detection, natural language processing and Bayesian modelling. More recently he has focused on solving churn cases, allowing him to combine his enthusiasm for both Bayesian modelling and time series on real-life problems.
The Future of Audiobooks
DEEP LEARNING APPLIED
Jon McLoone - Wolfram Research Europe
Deep Learning & Traditional Computation Better Together
Deep learning is big news at the moment, but it’s not the only game in town. Wolfram has spent 30 years unifying the full breadth of computation from modelling to statistics and from social network analysis to machine learning. This talk will explain how a combination of approaches can reveal insights that a single approach cannot access. Examples will be taken from several business contexts and multiple domains of data.
As Director of Technical Services, Communication and Strategy at Wolfram Research Europe, Jon McLoone is central to driving the company's technical business strategy and leading the consulting solutions team. Described as “The Computation Company”, the Wolfram group are world leaders in integrated technology for computation, data science and AI including machine learning. With over 25 years of experience working with Wolfram Technologies, Jon has helped direct software development, system design, technical marketing, corporate policy, business strategies and much more. Jon regularly gives keynotes and media interviews on topics such as the Future of AI, Enterprise Computation Strategies and Education Reform, across multiple fields including healthcare, fintech and data science. He holds a degree in mathematics from the University of Durham. Jon is also Co-founder and Director of Development for computerbasedmath.org, an organisation dedicated to a fundamental reform of maths education and the introduction of computational thinking. The movement is now a worldwide force in re-engineering the STEM curriculum with early projects in Estonia, Sweden and Africa.
Sergei Bobrovskyi - Airbus
Industrial Time Series Anomaly Detection
Time series are ubiquitous in aerospace engineering, and processing them can generate large business value. Manufacturing tools and, more importantly, aerospace assets themselves produce large amounts of sensor signals, which cannot be captured, let alone analyzed, in their totality by humans. In this talk we focus on automatic anomaly detection for aircraft sensors. We assess the industrial viability of various semi-supervised anomaly detection systems based on Deep Learning for the automatic discovery of point, contextual and collective anomalies on large datasets with little prior knowledge. Moreover, we present the results of a challenge with the same goals hosted by Airbus on its AIGym co-innovation platform, engaging over 150 academic and industrial teams worldwide.
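The systems discussed in the talk are deep models on Airbus data; as a minimal stand-in for the semi-supervised idea, the sketch below fits a linear model (PCA, i.e. a linear autoencoder) on "normal" sensor windows only and flags points whose reconstruction error exceeds a threshold derived from the normal data. All signals, dimensions and thresholds here are synthetic and illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear_autoencoder(X, k):
    """PCA as a linear autoencoder: keep the top-k principal directions."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]  # mean and encode/decode basis of shape (k, d)

def reconstruction_error(X, mu, basis):
    Z = (X - mu) @ basis.T  # encode
    X_hat = Z @ basis + mu  # decode
    return np.linalg.norm(X - X_hat, axis=1)

# "Normal" training windows: 2-D correlated sensor behaviour embedded in 5-D.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 5))
X_train = latent @ mixing + 0.05 * rng.normal(size=(500, 5))

mu, basis = fit_linear_autoencoder(X_train, k=2)

# Threshold from training errors only: semi-supervised, no anomaly labels used.
threshold = np.quantile(reconstruction_error(X_train, mu, basis), 0.99)

normal_test = rng.normal(size=(1, 2)) @ mixing + 0.05 * rng.normal(size=(1, 5))
anomaly = 3.0 * rng.normal(size=(1, 5))  # breaks the learned correlation structure
err_normal = reconstruction_error(normal_test, mu, basis)[0]
err_anomaly = reconstruction_error(anomaly, mu, basis)[0]
print(err_anomaly > threshold)  # the anomaly lies far from the learned subspace
```

Replacing the linear encode/decode with a deep network gives the reconstruction-based family of detectors the abstract alludes to; the semi-supervised logic (fit on normal data, threshold on its errors) is unchanged.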
Dr. Sergei Bobrovskyi is a Data Scientist within the Analytics Accelerator team of the Airbus Digital Transformation Office. His work focuses on applications of AI for anomaly detection in time series, spanning various use-cases across Airbus. Prior to Airbus he worked on automated fraud detection for one of the largest e-commerce companies in Germany. Before that he was engaged in various research related positions in the space industry.
Sergei holds a PhD in theoretical physics as well as a physics Diploma from the University of Hamburg. Besides physics he also studied philosophy with an emphasis on the philosophy of mind.
Guang Yang - Imperial College London
Augmented Intelligence for cardiovascular imaging and analysis
As an important branch of artificial intelligence, deep learning is known as one of the breakthrough technologies of recent years and plays a growing role in data analysis. It improves on earlier neural networks by adding more layers of computation, enabling higher levels of abstraction and prediction in the data. It has become the leading machine learning tool in general imaging and computer vision. Deep learning has also achieved important results in the medical field, especially in the processing and analysis of medical image data. However, a substantial gap remains in translating these results into clinical practice. In this talk, Dr Guang Yang will focus on the application of Augmented Intelligence for cardiovascular imaging and analysis, rather than on generalised Artificial Intelligence technology. Dr Guang Yang will also explain the current technology, breakthroughs and prospects.
Dr. Guang Yang (B.Eng, M.Sc., Ph.D., M.IEEE, M.ISMRM, M.SPIE) obtained his M.Sc. in Vision Imaging and Virtual Environments from the Department of Computer Science in 2006 and his Ph.D. in medical image analysis jointly from the CMIC, Department of Computer Science and Medical Physics in 2012, both from University College London. He is currently an honorary lecturer with the Neuroscience Research Centre, Cardiovascular and Cell Sciences Institute, St. George’s, University of London. He is also an image processing physicist and honorary senior research fellow at the Cardiovascular Research Centre, Royal Brompton Hospital, and is affiliated with the National Heart and Lung Institute, Imperial College London. He previously worked for Siemens Medical Solutions and Medicsight PLC, gaining extensive industrial experience. He leads two pending international patent applications in the field of medical image processing. His research collaborators are from Imperial College London, St. George’s, University of London, Cambridge University, the University of Lincoln, City University London, University College London, Fudan University and Shanghai Jiao Tong University in China, and UCLA in the United States. He has participated in many medical image analysis projects, including breast tumour image analysis using digital breast tomosynthesis (funded by the Department of Trade and Industry and EPSRC); colon cancer computer-aided diagnosis and detection using CT imaging (funded by TSB); and multimodal advanced MRI analysis for brain tumour grading, classification, growth modelling and therapy planning (funded by CRUK). At the National Heart and Lung Institute, he worked on a cardiac MRI project funded by NIHR. Recently, he proposed a novel workflow to achieve fast acquisition, superior image quality and quantitative analysis for late gadolinium enhancement (LGE) MRI images, which could help the diagnosis, treatment planning and prognosis of atrial fibrillation patients.
For this proposal, Dr. Guang Yang (Co-PI) has been awarded a British Heart Foundation project grant. Dr Yang was a recipient of the Tier 1 Exceptional Talent Visa Award, endorsed by the Royal Academy of Engineering. He is a founding member of the IIAT in Hangzhou, China (hz-iiat.cn) and an advisory board member of Aladdin Healthcare Technologies SE (NMI: Frankfurt, aladdinid.com).
Ingmar Posner - University of Oxford
Ingmar is an Associate Professor in Engineering Science at the University of Oxford specialising in applied machine learning solutions for robot perception and decision making. He is a long-standing member of the Mobile Robotics Group (now the Oxford Robotics Institute) where he leads research in machine perception and planning. His research is guided by his vision to create machines which constantly improve through use in their dedicated workspace by implicitly leveraging expert demonstrations in a manner entirely transparent to the user. Highlights of his work include state-of-the-art approaches to deep and shallow object detection, semantic segmentation, tracking and inverse reinforcement learning. Ingmar has coauthored over 40 research publications and is the recipient of a number of best paper awards at international robotics conferences such as ISER and ICAPS. He serves on the board of IJRR, the premier international robotics research journal and has repeatedly served as area chair and programme committee member for reputed conferences in robotics and machine learning. Recently Ingmar led a team to develop and demonstrate the first autonomous urban concept vehicle on a purpose built slow-speed racetrack at Shell’s Make the Future London event. In 2014 Ingmar also co-founded Oxbotica, a leading provider of mobile autonomy software solutions including the Selenium autonomy stack, which underpins a variety of public and commercial autonomous vehicle programmes such as the LUTZ and GATEWay projects in Milton Keynes and Greenwich. In 2015 Oxbotica was singled out by the Wall Street Journal as one of the top ten EMEA technology startups.
Trevor Back - DeepMind Health
Trevor works closely with DeepMind Health NHS partners on research projects, exploring how technology could help clinicians provide more effective and faster care to patients. This work includes a partnership with Moorfields Eye Hospital NHS Trust, where Trevor is collaborating with NHS clinical researchers to use machine learning to help analyse eye scans, with the goal of getting faster treatment to people with early symptoms of some of the major causes of sight loss. He joined DeepMind having completed a Ph.D. in Astrophysics at the Royal Observatory in Edinburgh.
FRONTIERS & CHALLENGES
Li Erran Li - Pony.ai/Columbia University
Machine Learning for Autonomous Driving: Recent Advances and Future Challenges
Tremendous progress has been made in applying machine learning to autonomous driving. However, fundamental challenges remain. In this talk, I will present recent advances in applying machine learning to the perception, prediction, planning and control problems of autonomous driving. I will discuss key research challenges in learning more robust and abstract representations, scene understanding, behavior prediction, and decision-making in complex real-world scenarios.
Dr. Li Erran Li is the chief scientist at Pony.ai and an adjunct professor at Columbia University. Prior to joining Pony.ai, he was with the perception team at Uber ATG and machine learning platform team at Uber. There, Erran worked on deep learning for autonomous driving, led the machine learning platform team technically and drove strategy for company-wide artificial intelligence initiatives. Before Uber, Erran worked at Bell Labs. Dr. Li’s current research interests are machine learning, computer vision, learning-based robotics and their application to autonomous driving. Dr. Li has a PhD from the Computer Science Department at Cornell University. Dr. Li is an IEEE Fellow and an ACM Fellow.
Will Fletcher - Datatonic
Celebrating Dataset Diversity in Recommender Problems
At Datatonic, we know that real datasets don’t always resemble those we use when studying machine learning. Useful though they are, the Netflix Prize or the MovieLens benchmarks are not fully representative of the sorts of data available in commercial recommendation scenarios. Should we always reshape the problem to fit existing solutions, or can we be more sensitive? The right method for a given problem depends not only on the desired outcome (an item to recommend) but also the available inputs. With care, incorporating fragmented information at the right time in the design process can provide the best opportunity for learning meaningful models. Here we explore a variety of data schematics and encourage modular thinking to design suitable architectures.
Will Fletcher is an ML Researcher at Datatonic. He comes from an academic background in Chemistry at Oxford University, followed by Computational Statistics and Machine Learning at UCL. His current role concentrates on bringing a variety of machine learning techniques to clients’ data to help them extract value, with a particular focus on recommendation and retrieval problems. Working with Google Cloud, he delivers trainings in machine learning for business users. Will’s interests within ML include Bayesian Networks, information density, and versatile embeddings.
PANEL: Explainable AI - Solving the Black Box Problem
END OF SUMMIT
Hado van Hasselt - DeepMind
Reinforcement learning is the science of how to learn to make decisions through interaction. Deep learning is the science of learning representations, mappings, and functions from data. The intersection of these two fields has great potential, because deep reinforcement learning algorithms give us a means to learn behaviour through interaction with complex problems, by using general learning algorithms rather than having to rely on extensive domain knowledge. In my talk, I will discuss why I'm excited about this research field, I will talk about open challenges, and discuss some recent successes in applying these algorithms to interesting problems.
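As a toy illustration of "learning behaviour through interaction" without domain knowledge, the sketch below runs tabular Q-learning, the classical ancestor of the deep RL methods the talk covers, on a five-state corridor. The environment, rewards and hyperparameters are invented for illustration and have no connection to the speaker's work.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny deterministic chain MDP: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 yields reward 1 and ends the episode.
n_states, n_actions, goal = 5, 2, 4

def step(s, a):
    s2 = min(s + 1, goal) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == goal), s2 == goal  # next state, reward, done

Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

for _ in range(500):
    s, done = 0, False
    while not done:
        if rng.random() < eps:  # epsilon-greedy exploration
            a = int(rng.integers(n_actions))
        else:                   # greedy action, breaking ties at random
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the best value in the next state.
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)
print(policy[:goal])  # the greedy policy learns to move right toward the goal
```

Deep RL replaces the table `Q` with a neural network so the same interaction loop scales to complex observations, which is exactly the combination the abstract describes.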
Translating Guidelines into Action: Your Q&A - Ethics in AI
In early 2019, the High-Level Expert Group on AI presented a set of ethics guidelines for trustworthy AI. What do these mean, and how can you apply best practice in your organisation? Join us for a 'town-hall' style session where attendees will have the opportunity to ask experts their questions and practical concerns, and where the experts will offer key takeaways and best practice for attendees to apply in their own work.
Rising Stars: The Next Generation of AI Pioneers - The Future of AI
Session takeaways: 1) What is emerging in the field? 2) What industry do the next generation believe will be most impacted by AI over the coming years? 3) What inspired these rising stars to explore STEM and what is the role of all stakeholders in encouraging young people to do the same?
How AI is Rewiring How Organisations Think - Dyson
We will hear from Ryan den Rooijen, Global Director of Data Services at Dyson on their digital transformation journey and how they are driving change internally.
Vishal Motwani - Einstein.ai, Salesforce
BehindTheScenes - Salesforce Voice Assistant
Using Deep Learning technology, we intend to transform unstructured data such as voice and text into more structured data that empowers users to execute actions. This is a huge problem in itself even if a single company were building an assistant to interact with its own organization, but it is even more challenging to build this as a platform while remaining agnostic of the underlying details.
Level of expertise to benefit from the session: some technical knowledge is helpful, as the talk covers the different parts required to build such an assistant.
Vishal is a Member of Technical Staff at Salesforce, where he works on Einstein Voice Assistant, bringing best-in-class NLP capabilities from Salesforce AI Research to production. Prior to Salesforce, he contributed to research evaluating the impact of lawyers' voices on US Supreme Court case outcomes. He has also worked on predicting the optimal distribution of bikes in New York City. He holds a Master's in Computer Science from New York University's Courant Institute of Mathematical Sciences, where he studied Natural Language Processing, Computer Vision and more.
Preparing Your Dataset for ML - Technical Lab
Session takeaways: 1) Why is data preparation important? 2) Where should you gather the data? 3) How do you handle missing data?
This session is for everyone.
Investing in Startups: Hear from the Investors - Panel & Connect
Session takeaways: 1) What are the short, medium and long-term challenges in investing in AI to solve challenges in business & society? 2) What are the main success factors for AI startups? 3) What are the challenges from a VC perspective?
RE•ACT - Blue-Sky Thinking
The RE•ACT series gives you the chance to have your say and voice your thoughts on the future of AI and how together we can solve challenges on this planet. Look out for the RE•ACT boards at the event and get involved in the polls and discussions on the event app. We'll come together at RE•ACT to dive deep into the impact of AI on the environment, education, diversity, and more. What partnerships need to be formed to move this problem forwards? What is the role of industry and academia to solve this challenge in question? This session is open to all and you are more than welcome to share as much or as little as you choose.