REGISTRATION & LIGHT BREAKFAST
DEEP REINFORCEMENT LEARNING - THE FOUNDATIONS
Exploring the Fundamentals of Reinforcement Learning
Jacob Andreas - MIT/Microsoft Semantic Machines
Learning to Act by Learning to Describe
The named concepts and compositional operators in natural language are a rich source of information about the kinds of abstractions humans use to interact with the world. Can we use this linguistic background knowledge to build more effective intelligent agents? This talk will explore two problems at the intersection of language and reinforcement learning: using interaction with the world to improve language generation, and using models for language generation to efficiently train reinforcement learners.
Jacob Andreas is an assistant professor at MIT and a senior researcher at Microsoft Semantic Machines. His research focuses on language learning as a window into reasoning, planning and perception, and on more general machine learning problems involving compositionality and modularity. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill scholar) and his B.S. from Columbia. He has been the recipient of an NSF graduate fellowship, a Facebook fellowship, and paper awards at NAACL and ICML.
Shane Gu - Google Brain
Deep Reinforcement Learning Toward Robotics
Deep reinforcement learning (RL) has shown promising results for learning complex sequential decision-making behaviors in environments ranging from computer games and the game of Go to simulated humanoids. However, most successes have been confined to simulation, and results in real-world applications such as robotics are limited, largely due to the poor sample efficiency of typical deep RL algorithms and other challenges. In this talk, I present essential components for deep reinforcement learning in the wild. First, I will discuss methods that improve the performance and sample efficiency of the core RL algorithms, blurring the boundaries among classic model-based RL, off-policy model-free RL, and on-policy model-free RL. In the latter part, I illustrate other practical challenges for enabling autonomous learning agents in the real world, particularly that current RL formulations require constant human intervention for safety, resets, and reward engineering, and do not scale to learning diverse skills. I present our recent work addressing those challenges and show pathways toward continually learning robots in the real world.
Shane Gu is a Research Scientist at Google Brain, where he mainly works on problems in deep learning, reinforcement learning, robotics, and probabilistic machine learning. His recent research focuses on sample-efficient RL methods that could scale to solve difficult continuous control problems in the real world, work that has been covered on the Google Research blog and in MIT Technology Review. He completed his PhD in Machine Learning at the University of Cambridge and the Max Planck Institute for Intelligent Systems in Tübingen, where he was co-supervised by Richard E. Turner, Zoubin Ghahramani, and Bernhard Schölkopf. During his PhD, he also collaborated closely with Sergey Levine at UC Berkeley/Google Brain and Timothy Lillicrap at DeepMind. He holds a B.ASc. in Engineering Science from the University of Toronto, where he did his thesis with Geoffrey Hinton on distributed training of neural networks using evolutionary algorithms.
ADVANCING RESEARCH METHODS & TOOLS
Ofir Nachum - Google Brain
Learning Abstractions with Hierarchical Reinforcement Learning
Hierarchical RL has long held the promise of enabling deep RL to solve more complex and temporally extended tasks by abstracting away lower-level details from a higher-level agent. In this talk, we describe how to turn this promise into a reality. We present a hierarchical design in which a higher-level agent solves a task by iteratively directing a lower-level policy to reach certain goals. We describe how both levels may be trained concurrently in a highly efficient, off-policy manner. Furthermore, we present a provably optimal technique for learning abstract notions of 'goals' without explicit supervision. Our resulting method achieves excellent performance on a suite of difficult navigation tasks.
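The two-level interaction described above can be sketched as a generic goal-conditioned hierarchy. This is an illustrative sketch only, not the speaker's implementation: the callables `env_step`, `high_policy`, and `low_policy`, the distance-based intrinsic reward, and the goal-transition rule are our assumptions about how such a design is commonly wired together.

```python
import numpy as np

def intrinsic_reward(state, goal, next_state):
    # Lower-level reward: negative distance to the target implied by the
    # higher level's goal (a desired change in state), as in goal-conditioned HRL.
    return -float(np.linalg.norm(state + goal - next_state))

def hierarchical_rollout(env_step, state, high_policy, low_policy, horizon=20, c=5):
    """One episode of a two-level agent: every `c` steps the higher level emits
    a goal; the lower level acts to realize it and is trained from
    `intrinsic_reward`, while the higher level is trained from the external
    reward. All three callables are user-supplied."""
    transitions = []
    for t in range(horizon):
        if t % c == 0:
            goal = high_policy(state)            # new subgoal every c steps
        action = low_policy(state, goal)
        next_state, ext_reward = env_step(state, action)
        transitions.append(
            (state, goal, action, intrinsic_reward(state, goal, next_state), ext_reward)
        )
        # Goal transition: keep the subgoal pointing at the same absolute
        # target as the state changes (h(s, g, s') = s + g - s').
        goal = state + goal - next_state
        state = next_state
    return transitions
```

In an off-policy setup, the lower-level tuples `(state, goal, action, intrinsic_reward)` and the higher-level tuples `(state, goal, external_reward)` would be stored in replay buffers and used to train each level concurrently.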
Ofir Nachum currently works at Google Brain as a Research Scientist. His research focuses on reinforcement learning, with notable work including PCL (path consistency learning) and HIRO (hierarchical reinforcement learning with off-policy correction). He received his Bachelor's and Master's from MIT. Before joining Google, he was an engineer at Quora, leading machine learning efforts on the feed, ranking, and quality teams.
Deep Inverse Reinforcement Learning
Jeff Clune - Uber AI Labs
Go-Explore: A New Type of Algorithm for Hard-exploration Problems
A grand challenge in reinforcement learning is producing intelligent exploration, especially when rewards are sparse or deceptive. I will present Go-Explore, a new algorithm for such ‘hard-exploration’ problems. Go-Explore dramatically improves the state of the art on benchmark hard-exploration problems, enabling previously unsolvable problems to be solved. I will explain the algorithm and the new research directions it opens up. I will also explain why we believe it will enable progress on previously unsolvable hard-exploration problems in a variety of domains, especially the many that harness a simulator during training (e.g. robotics). More information can be found at https://eng.uber.com/go-explore
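The "first return, then explore" loop at the heart of the algorithm can be sketched as a minimal toy version, based on the public description at the URL above. Several simplifications are our own: uniform cell selection and trajectory replay in a deterministic environment stand in for the paper's selection heuristics and simulator-state restore, the `cell_fn` abstraction is user-supplied, and all function names are hypothetical.

```python
import random

def go_explore(env_reset, env_step, cell_fn, iterations=300, explore_steps=10, seed=0):
    """Toy Go-Explore archive loop: remember the best trajectory reaching each
    'cell' (a coarse state abstraction), return to an archived cell, then
    explore randomly from it, archiving any new or improved cells."""
    rng = random.Random(seed)
    state = env_reset()
    archive = {cell_fn(state): (0.0, [])}    # cell -> (best score, actions reaching it)
    for _ in range(iterations):
        # 1. Select an archived cell to return to (uniform here; the paper
        #    weights cells by visit-count heuristics).
        cell = rng.choice(list(archive))
        score, traj = archive[cell]
        # 2. Go: replay the stored actions in the deterministic environment
        #    (stands in for restoring simulator state).
        state = env_reset()
        for a in traj:
            state, _ = env_step(state, a)
        # 3. Explore from there with random actions (toy action set of +/-1).
        new_traj, new_score = list(traj), score
        for _ in range(explore_steps):
            a = rng.choice([-1, +1])
            state, r = env_step(state, a)
            new_traj.append(a)
            new_score += r
            c = cell_fn(state)
            if c not in archive or new_score > archive[c][0]:
                archive[c] = (new_score, list(new_traj))
    return archive
```

Because every visited cell is archived with the trajectory that reached it, the frontier of the archive steadily expands even under sparse rewards, which is the key idea behind the first, exploration phase of the algorithm.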
Jeff Clune is the Loy and Edith Harris Associate Professor in Computer Science at the University of Wyoming and a Senior Research Scientist and founding member of Uber AI Labs. He focuses on robotics, reinforcement learning, and training neural networks either via deep learning or evolutionary algorithms. He has also researched open questions in evolutionary biology using computational models of evolution, including the evolutionary origins of modularity, hierarchy, and evolvability. Prior to becoming a professor, he was a Research Scientist at Cornell University, received a PhD in computer science and an MA in philosophy from Michigan State University, and received a BA in philosophy from the University of Michigan.
Marc G. Bellemare - Google Brain
Understanding How Value Predictions Shape Deep Representations
A reinforcement learning agent is only as good as its internal representation of the environment. To wit, a great part of the success of deep reinforcement learning (deep RL) is due to the ease with which its algorithms adapt their state representations; improving our control over this process is a necessary step towards taking reinforcement learning into everyday usage. This talk presents some of our recent work on demystifying the mechanisms by which deep RL algorithms acquire their representations, and explaining why some methods are more successful than others. In particular, I will show how a certain class of auxiliary predictions, derived from the notion of an adversarial value function, help shape good representations. I will illustrate these findings with useful visualizations of the representation learning process in the context of Atari game-playing and on synthetic environments.
Marc G. Bellemare is a research scientist at Google Brain in Montreal, Canada; a CIFAR Learning in Machines & Brains Fellow; adjunct professor at McGill University; and was recently awarded a Canada CIFAR AI Chair, held at the Montreal Institute for Learning Algorithms (Mila). He received his Ph.D. from the University of Alberta, where he studied the concept of domain-independent agents and built the highly successful Arcade Learning Environment, the platform for AI research on Atari 2600 games. From 2013 to 2017 he was a research scientist at DeepMind, where he made important contributions to the field of deep reinforcement learning. He is known for his work on reinforcement learning, including exploration, representation learning, and distributional reinforcement learning.
Ashley Edwards - Uber AI Labs
Learning Values and Policies from State Observations
Observational learning is a key component for human development that enables solving tasks by observing others perform them. For example, we might learn to cook a new dish by watching a video of it being prepared. Notably, we are capable of mirroring behavior through only the observation of state trajectories without direct access to the underlying actions (e.g., the exact kinematic forces) and intentions that yielded them. In order to be general, artificial agents should also be equipped with the ability to quickly solve problems after observing the solution. In this presentation, I will first discuss an approach for inferring values directly from state observations that can then be used to train reinforcement learning agents. Then, I will describe an approach that enables learning a latent policy directly from state observations, which can then be quickly mapped to real actions in the agent’s environment.
Ashley Edwards is a research scientist at Uber AI Labs and recently obtained her PhD in computer science from Georgia Tech. Her research focuses on deep reinforcement learning, imitation learning, and model-based RL problems, with an emphasis on developing general goal representations that can be used across task environments. During her time as a PhD student at Georgia Tech, she was a recipient of the NSF Graduate Research Fellowship, was a visiting researcher at Waseda University in Japan as part of the NSF Grow program, and interned at Google Brain. She received a B.S. in Computer Science from the University of Georgia in 2011.
Junhyuk Oh - DeepMind
AlphaStar: Mastering the Real-Time Strategy Game StarCraft II
Deep reinforcement learning approaches have been shown to perform well on domains where tasks and rewards are well-defined. However, in adversarial multi-agent environments, where the agent is required to improve its policy through self-play, the agent should not only solve the given task (i.e., learn to beat itself via self-play) but also develop diverse policies and strategies over time in order to become strong and robust when playing against unseen competitors. In this talk, I will present AlphaStar, the first AI to defeat a top professional player at StarCraft II, one of the most challenging real-time strategy (RTS) games. Specifically, I will show how such complex and robust strategies can emerge through a distributed multi-agent RL algorithm in which a population of agents compete with each other under slightly different internal goals.
Junhyuk Oh is a research scientist at DeepMind. He received his Ph.D. in Computer Science and Engineering from the University of Michigan in 2018, co-advised by Prof. Honglak Lee and Prof. Satinder Singh. His research focuses on deep reinforcement learning problems such as dealing with partial observability, generalization, planning, and multi-agent reinforcement learning. His work has been featured in MIT Technology Review and the Daily Mail.
Karl Cobbe - OpenAI
Quantifying Generalization in Deep Reinforcement Learning
In the most common deep RL benchmarks, it is customary to use the same environments for both training and testing. Unfortunately, this practice offers relatively little insight into an agent’s ability to generalize. To address this issue, I will introduce a procedurally generated environment called CoinRun, which provides distinct sets of levels for training and testing. Using this benchmark, I will show that agents overfit to surprisingly large training sets. I will then show that deeper convolutional architectures improve generalization, as do methods traditionally found in supervised learning, including L2 regularization, dropout, data augmentation, and batch normalization.
Karl Cobbe is currently a research scientist at OpenAI. He received his BS in computer science with distinction from Stanford University in 2014. He first joined OpenAI as a research fellow, working under the mentorship of John Schulman. His research primarily focuses on generalization and transfer in deep reinforcement learning. Karl is particularly interested in leveraging procedural generation to create diverse training environments, to better investigate the limitations of current algorithms and the factors that lead to overfitting.
One-Shot Reinforcement Learning for Navigation
FRONTIERS & CHALLENGES
Alekh Agarwal - Microsoft Research AI
Towards a Theory of Sample-Efficient Reinforcement Learning with Rich Observations
How can we tractably solve sequential decision-making problems where the learning agent receives rich observations? We will summarize a set of recent results in this direction which study a family of RL problems called Contextual Decision Processes (CDPs). CDPs generalize MDPs and POMDPs and describe a fairly general set of sequential decision-making problems, so that any sample-efficient method in this model has broad applicability. We will discuss different structural properties which enable sample-efficient model-free as well as model-based techniques. We will primarily focus on an algorithm, appearing at ICML 2019, that is both computationally practical and theoretically sound. The talk will also familiarize the audience with the broader research activities related to reinforcement learning in Microsoft Research AI.
This talk is based on joint work with several collaborators and on the papers: https://arxiv.org/abs/1610.09512, https://arxiv.org/abs/1803.00606, https://arxiv.org/abs/1811.08540, and https://arxiv.org/abs/1901.09018.
Alekh Agarwal is a Senior Researcher in Microsoft Research AI, where he leads the reinforcement learning group. Prior to joining Microsoft Research AI, Alekh obtained his PhD from UC Berkeley and then spent six years in Microsoft Research's New York City lab. His research focuses on several aspects of interactive learning, including reinforcement learning, contextual bandits, and online learning. He has also worked extensively in stochastic and distributed optimization, and received the best paper award at NeurIPS 2015.
Rupam Mahmood - Kindred
Reproducibility in Reinforcement Learning with Physical Robots
Recent breakthroughs in computer games and board games have shown the power and promise of deep reinforcement learning (RL) approaches to sequential decision making. While these advances have inspired many to apply deep RL techniques to real-world problems, applying these techniques to moment-by-moment control of real robots has been a challenge. At the same time, researchers have recently identified a deepening reproducibility crisis in deep RL research, hindering effective sharing of knowledge. In this talk, I present insights based on our recent work at Kindred indicating that these two challenges are closely related, and that the reproducibility crisis of deep RL can be far worse with physical robots. In our work, we systematically investigate these challenges and suggest steps that enable learning nearly as reliably with physical robots as with virtual ones. This allowed us to perform extensive deep RL research, such as hyperparameter studies of learning algorithms on multiple tasks, as well as to solve challenging problems, such as docking a mobile robotic base to a charging station solely using real interactions. We incorporated our insights into SenseAct, an open-source toolkit for real-world robot learning, providing implementations of six different RL tasks with three different commercially available robots as well as a framework for implementing new tasks efficiently. SenseAct has facilitated reproducibility of learning results in physical environments and allowed us to prototype learning solutions directly in production setups, bringing deep RL one step closer to the real world.
Rupam Mahmood is Lead of the AI Research team at Kindred, where he designs and studies learning systems for controlling Kindred's robotic products. His primary objective is to understand the underlying principle behind real-time goal-driven systems by building them for robots. He is the creator of SenseAct, the first open-source toolkit for real-time reinforcement learning with physical robots. He received his Ph.D. in statistical machine learning from the Department of Computing Science at the University of Alberta, supervised by Richard Sutton. During his graduate studies, he developed and studied learning-rate adaptation, representation search, and off-policy learning algorithms, a class of methods for learning behaviors and rich knowledge representations in a counterfactual manner.
PANEL: Reproducibility in Reinforcement Learning
Conversation & Drinks
Deep RL: Learning to Navigate
Causal Learning vs Reinforcement Learning
Alicia Kavelaars - OffWorld
An Industrial AI Revolution in Space Starts Deep Underground on Earth
Will we ever have an Industrial AI sector in space? Many startups are working on developing intelligent space robotic systems that will help humans settle in space. However, launching and testing these systems in space is extremely costly and drawn-out. Developing a rugged swarm robotic system for terrestrial applications, by contrast, is not only feasible but enables the production of thousands of AI-led industrial robots that can transform industrial sectors on Earth today. Starting with mining, we can revolutionize the way we work in extreme environments on our home planet, as a test bed for other planetary bodies tomorrow.
Alicia is Co-Founder and Chief Technology Officer at OffWorld Inc. She brings over 15 years of experience in the aerospace industry developing and successfully launching systems for NASA, NOAA, and the telecommunications industry. In 2015, Alicia made the jump to New Space to work on cutting-edge innovation programs. In her tenure at OffWorld, Alicia has led the development of AI-based rugged robots that will be deployed in one of the most extreme environments on Earth as a precursor to swarm robotic space operations: deep underground mines. Alicia holds an MSc. and a PhD from Stanford University and a BSc. in Theoretical Physics from UAM, Spain.
DEEP RL & NLP
Mohammad Norouzi - Google Brain
Reinforcement Learning Meets Sequence Prediction
Neural sequence-to-sequence models have seen remarkable success across a range of tasks including machine translation and speech recognition. I will give an overview of the dominant approach to supervised sequence learning using neural networks. Then, I will present optimal completion distillation (OCD), a new approach for training sequence models based on their own mistakes. Given a partial sequence generated by a model, OCD identifies the set of optimal suffixes and, accordingly, teaches the model to optimally extend each prefix. OCD achieves state-of-the-art performance on end-to-end speech recognition on standard benchmarks. In the second half of the talk, I will focus on sequence modeling tasks that involve discovering latent programs as part of the optimization. I will present our approach, called memory augmented policy optimization (MAPO), which improves upon REINFORCE by expressing the expected return objective as a weighted sum of two terms: an expectation over a memory of trajectories with high rewards, and a separate expectation over the trajectories outside of the memory. MAPO achieves the state of the art on standard semantic parsing datasets.
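The weighted-sum decomposition of the expected return mentioned above can be written out as follows (a sketch following the MAPO paper's formulation; the notation, with $\mathcal{B}$ for the memory of high-reward trajectories $a$, is ours):

```latex
O(\theta)
  = \mathbb{E}_{a \sim \pi_\theta}\!\left[ R(a) \right]
  = \sum_{a \in \mathcal{B}} \pi_\theta(a)\, R(a)
    \;+\; \bigl(1 - \pi_\theta(\mathcal{B})\bigr)\,
          \mathbb{E}_{a \sim \pi_\theta^{-}}\!\left[ R(a) \right],
\qquad
\pi_\theta(\mathcal{B}) = \sum_{a \in \mathcal{B}} \pi_\theta(a),
```

where $\pi_\theta^{-}$ is $\pi_\theta$ renormalized over trajectories outside the memory. The first term is computed exactly by enumerating $\mathcal{B}$; the second is estimated by sampling, which reduces the variance of the policy-gradient estimate relative to plain REINFORCE.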
Mohammad Norouzi is a senior research scientist at Google Brain in Toronto. His research lies at the intersection of deep learning, natural language processing, and computer vision. His current research focuses on learning statistical models of sequential data and advancing reinforcement learning algorithms and applications. He earned his PhD in computer science at the University of Toronto under the supervision of Prof. David Fleet, working on scalable similarity search algorithms. He was a recipient of the prestigious Google US/Canada PhD Fellowship in machine learning.
DRL & ROBOTICS
Pulkit Agrawal - UC Berkeley
Continually Evolving Machines: Learning by Experimenting
An open question in artificial intelligence is how to endow agents with common sense knowledge that humans naturally seem to possess. A prominent theory in child development posits that human infants gradually acquire such knowledge through the process of experimentation. According to this theory, even the seemingly frivolous play of infants is a mechanism for them to conduct experiments to learn about their environment. Inspired by this view of biological sensorimotor learning, I will present my work on building artificial agents that use the paradigm of experimentation to explore and condense their experience into models that enable them to solve new problems. I will discuss the effectiveness of my approach and open issues using case studies of a robot learning to push objects, manipulate ropes, and find its way in office environments, and of an agent learning to play video games merely based on the incentive of conducting experiments.
Pulkit earned his Ph.D. in computer science from UC Berkeley and co-founded SafelyYou Inc. He will be starting as an Assistant Professor at MIT in the Fall of 2019. His research interests span robotics, deep learning, computer vision, and computational neuroscience. Pulkit completed his bachelors in Electrical Engineering from IIT Kanpur and was awarded the Director’s Gold Medal. His work has appeared multiple times in MIT Tech Review, Quanta, New Scientist, NYPost, etc. He is a recipient of Signatures Fellow Award, Fulbright Science and Technology Award, Goldman Sachs Global Leadership Award, OPJEMS and Sridhar Memorial Prize among others. Pulkit holds a “Sangeet Prabhakar” (equivalent to bachelors in Indian classical music) and occasionally performs in music concerts.
Dhruv Batra - Georgia Institute of Technology/Facebook AI Research (FAIR)
Habitat: A Platform for Embodied AI Research
We present Habitat, a new platform for the development of embodied artificial intelligence (AI). Training robots in the real world is slow, dangerous, expensive, and not easily reproducible. We aim to support a complementary paradigm – training embodied AI agents (virtual robots) in a highly photorealistic 3D simulator before transferring the learned skills to reality.
The ‘software stack’ for training embodied agents involves datasets providing 3D assets, simulators that render these assets and simulate agents, and tasks that define goals and evaluation metrics, enabling us to benchmark scientific progress. We aim to standardize this entire stack by contributing specific instantiations at each level: unified support for scanned and designed 3D scene datasets, a new simulation engine (Habitat-Sim), and a modular API (Habitat-API).
The Habitat architecture and implementation combine modularity and high performance. For example, when rendering a realistic scanned scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (FPS) running single-threaded and can reach over 10,000 FPS multi-process on a single GPU! Finally, we describe the Habitat Challenge, an autonomous navigation challenge that aims to benchmark and advance efforts in embodied AI.
Dhruv Batra is an Assistant Professor in the School of Interactive Computing at Georgia Tech and a Research Scientist at Facebook AI Research (FAIR).
His research interests lie at the intersection of machine learning, computer vision, natural language processing, and AI, with a focus on developing intelligent systems that are able to concisely summarize their beliefs about the world with diverse predictions, integrate information and beliefs across different sub-components or 'modules' of AI (vision, language, reasoning, dialog), and provide explanations and justifications for why they believe what they believe.
In the past, he has also worked on topics such as interactive co-segmentation of large image collections, human body pose estimation, action recognition, depth estimation, and distributed optimization for inference and learning in probabilistic graphical models.
He is a recipient of the Office of Naval Research (ONR) Young Investigator Program (YIP) award (2017), the Early Career Award for Scientists and Engineers (ECASE-Army) (2015), the National Science Foundation (NSF) CAREER award (2014), Army Research Office (ARO) Young Investigator Program (YIP) award (2014), Outstanding Junior Faculty awards from Virginia Tech College of Engineering (2015) and Georgia Tech College of Computing (2018), two Google Faculty Research Awards (2013, 2015), Amazon Academic Research award (2016), Carnegie Mellon Dean's Fellowship (2007), and several best paper awards (EMNLP 2017, ICML workshop on Visualization for Deep Learning 2016, ICCV workshop Object Understanding for Interaction 2016) and teaching commendations at Virginia Tech. His research is supported by NSF, ARO, ARL, ONR, DARPA, Amazon, Google, Microsoft, and NVIDIA. Research from his lab has been extensively covered in the media (with varying levels of accuracy) at CNN, BBC, CNBC, Bloomberg Business, The Boston Globe, MIT Technology Review, Newsweek, The Verge, New Scientist, and NPR.
From 2013 to 2016, he was an Assistant Professor in the Bradley Department of Electrical and Computer Engineering at Virginia Tech, where he led the VT Machine Learning & Perception group and was a member of the Virginia Center for Autonomous Systems (VaCAS) and the VT Discovery Analytics Center (DAC). From 2010 to 2012, he was a Research Assistant Professor at Toyota Technological Institute at Chicago (TTIC), a philanthropically endowed academic computer science institute located on the University of Chicago campus. He received his M.S. and Ph.D. degrees from Carnegie Mellon University in 2007 and 2010, respectively, advised by Tsuhan Chen. In the past, he has held visiting positions at the Machine Learning Department at CMU, CSAIL MIT, Microsoft Research, and Facebook AI Research.
Jeannette Bohg - Stanford University
Making Sense of Vision and Touch in Robot Manipulation Tasks
Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University and leads the Interactive Perception and Robot Learning lab. She was a group leader at the MPI for Intelligent Systems (Tübingen, Germany) until September 2017 and remains affiliated as a guest researcher. Before joining MPI-IS in January 2012, Jeannette Bohg was a PhD student at the Computer Vision and Active Perception lab (CVAP) at KTH in Stockholm. Her thesis on multi-modal scene understanding for robotic grasping was completed under the supervision of Prof. Danica Kragic. She studied at Chalmers in Gothenburg and at the Technical University in Dresden, where she received her Master in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time, and multi-modal, such that they can provide meaningful feedback for execution and learning.
Roberto Calandra - Facebook AI Research (FAIR)
Robots and the Sense of Touch
Humans make extensive use of touch. However, integrating the sense of touch in robot control has traditionally proved to be a difficult task. In this talk, I will discuss how machine learning can help to provide robots with the sense of touch, and the benefits of doing so.
Roberto Calandra is a Research Scientist at Facebook AI Research (FAIR). Previously, he was a postdoctoral scholar at UC Berkeley in the Berkeley Artificial Intelligence Research Laboratory (BAIR), working with Sergey Levine. Roberto received a Ph.D. from TU Darmstadt (Germany) under the supervision of Jan Peters and Marc Deisenroth, an M.Sc. in Machine Learning and Data Mining from Aalto University (Finland), and a B.Sc. in Computer Science from the Università degli studi di Palermo (Italy). His scientific interests focus on the conjunction of machine learning and robotics, in what is known as robot learning.
DRL APPLICATIONS IN THE REAL WORLD
Finance: Deep RL for Portfolio Management
Bastiane Huang - Osaro
Robot 2.0: Deep Reinforcement Learning For Industrial Robotics
Machine learning has enabled a move away from manually programming robots, allowing machines to learn and adapt to changes in the environment. We will discuss how AI-enabled robots are currently used in warehouse automation and how we can use warehouse robotics as a crystal ball and an example for other industries such as manufacturing and food assembly. We will also describe recent progress in deep reinforcement learning and imitation learning, and discuss the real-world requirements and challenges of various industrial problems, pipelined versus end-to-end systems, and the technology Osaro has developed to address the challenges in industrial robotics.
Bastiane Huang leads product strategy at Osaro, a San Francisco-based company building deep reinforcement learning software for industrial robots, backed by Peter Thiel and Jerry Yang’s AME Cloud. Bastiane has close to a decade of experience in the automation and manufacturing industries. Her experience in the field started in 2009 at e2v, a British space and industrial image sensor and machine vision camera manufacturer that is now part of Teledyne. She has broad experience in product marketing, business development, and operations at international technology companies across the industrial automation, IoT, AI, and robotics industries. She drove the formation and growth of a new AI software business at Advantech, the world’s biggest industrial computer manufacturer. The product offered video analytics solutions to improve traffic congestion and shopping experiences through people counting, and facial and heat map analysis. She was also an investor in and advisor to early-stage IoT and AI startups in the U.S. and Greater China, and previously worked as a Senior Product Manager at Amazon Alexa. In addition, she is actively involved with Harvard’s ‘Managing the Future of Work’ initiative on AI and robotics, writing case studies and articles for Harvard Business Review and Robotics Business Review. Bastiane holds a B.S. in Information Management (2009) from National Taiwan University and an M.B.A in Technology and Entrepreneurship (2018) from Harvard Business School.
Gaming: Mastering Gaming & Learning to Coordinate
Media: Generating Text with Deep Reinforcement Learning
END OF SUMMIT
An Introduction to DRL - Presentation & Q&A
Session takeaways: 1) An introduction to DRL models, algorithms and techniques; 2) Examples of DRL systems; 3) How DRL can be used for practical applications.
This session is for C-level and Senior Level Executives.
Creating a Purpose-Built AIOps Platform for IT - Moogsoft
Moogsoft develops AIOps technology that helps enterprise ITOps and DevOps teams become faster, smarter, and more effective. Moogsoft AIOps’ real-time machine learning algorithms help teams remediate issues that impact their customers’ experience by: 1) Reducing operational noise (alert fatigue) across the production stack. 2) Proactively detecting incidents and correlating events across the monitoring ecosystem. 3) Streamlining collaboration and workflow across teams and toolsets. 4) Codifying knowledge to make operators smarter when encountering future incidents.
How to Find the Best ML Framework for Your Business - Presentation & Open-Floor Q&A
Session takeaways: 1) Evaluate the purpose of the framework: what do you want to accomplish? 2) The strengths and weaknesses of common frameworks; 3) What programming language will be used to develop models?
This session is for data scientists and machine learning engineers.
Preparing Your Dataset for ML - Technical Lab
Session takeaways: 1) Why is data preparation important? 2) Where should you gather the data? 3) How do you handle missing data?
This session is for everyone.
Rising Stars: The Next Generation of AI Pioneers - The Future of AI
Session takeaways: 1) What is emerging in the field? 2) Which industry does the next generation believe will be most impacted by AI over the coming years? 3) What inspired these rising stars to explore STEM, and what is the role of all stakeholders in encouraging young people to do the same?
Reinventing your Company with AI & Becoming a Cognitive Enterprise - IBM
IBM Services partners with the world's leading companies to reimagine and reinvent themselves as smart businesses, from end to end and from both the outside in and the inside out. Our clients are optimizing processes with emerging technologies to deliver more intelligent workflows as an integral part of transforming into a cognitive enterprise. We partner with clients to design solutions, modernize and optimize their business, deliver sustained value, and empower their people. Our end-to-end capabilities take clients from inception to support and are backed by the power of the full IBM portfolio. A new era for business has arrived. ibm.com/services/process
The session is part of the Decision Maker Access pass. View further information on this here.
The Value of AI & ML in Digital Transformation - Presentation & Roundtable discussion
Session takeaways: 1) Discover how to accelerate ML initiatives in your business; 2) What does Artificial Intelligence- and Machine Learning-assisted digital transformation look like in practice, and what are the benefits of adoption? 3) Design a framework for where to start.
The session is for C-level and senior executives - responsible for driving digital transformation, innovation and introducing new tech methods and tools.
The session is part of the Decision Maker Access pass. View further information on this here.
Breakthrough Challenges in Industry: Cross-Industry Learnings from Financial & Healthcare Sectors - Case Study & Roundtable Discussion
Session takeaways: 1) What are some of the main short, medium & long-term issues in integrating AI into healthcare & financial sectors? 2) What learnings can be shared cross-industry? 3) What are the top benefits for integrating AI into each sector?
Investing in AI Startups: AI with Impact - Panel Discussion
Session takeaways: 1) What are the short, medium and long-term challenges in investing in AI to solve important problems in society? 2) What are the main success factors for AI startups? 3) What are the challenges from a VC perspective?
Ethics Workshop: Scenario-Based - Case Studies & Brainstorming
Session takeaways: 1) What are the key ethical questions that should be considered for AI implementation/projects? 2) How should we determine the success and fairness of AI? 3) How can we work through challenges & issues in a collective and inclusive manner?
The Challenges of Accessible AI - Roundtable Discussion
Session takeaways: 1) What do we mean by accessible AI, and why is it important? 2) How can we, as individuals and companies, play our part to increase the accessibility of AI? 3) How can an increased AI reach impact the application of AI?
Question Wall Discussions - Roundtable Discussions
Session takeaways: 1) Is AI the answer to solving this challenge, and if so, why? 2) What partnerships need to be formed to move this problem forwards? 3) What is the role of industry and academia to solve this challenge in question?
Building A Framework for RL - Technical Lab
Session takeaways: 1) An overview of promising open-source tools; 2) What are the challenges associated with training RL agents? 3) Building scalable frameworks.
This session is for: Data scientists, ML Engineers, RL Engineers