Schedule

08:15

REGISTRATION

09:00

WELCOME

Lex Fridman

Lex Fridman, MIT

Compere

AUTONOMOUS VEHICLES LANDSCAPE

09:15

Pratik Prabhanjan Brahma

Pratik Prabhanjan Brahma, Audi/VW Electronics Research Lab

Challenges & Directions for Applying Machine Learning in Autonomous Vehicles

The rapid growth in the field of self-driving cars and connected vehicles has been fairly recent yet phenomenal. We are witnessing greater collaboration among industry, academia, and the open source community to reach higher levels of autonomy. While it is exciting to see the continuously increasing use of AI and deep machine learning algorithms in this domain, one needs to be aware of the challenges that come along with these intelligent modules under various circumstances. There are challenges ranging from functionality to security, from online to offline processing, and even in communicating autonomous capabilities and limitations to the end consumer. This talk will try to shed light on some of these problems and possible solutions. It will also present some of our in-house R&D and how we see the future of autonomous vehicles.

Pratik Prabhanjan Brahma is a machine learning research engineer at the Volkswagen Group of America Electronics Research Laboratory (ERL) in Belmont, California. His current research focuses on developing deep learning technologies for autonomous driving perception modules. He received his Ph.D. from the University of Florida and has multiple patents and publications in the field of machine learning. He received his Bachelor’s and Master’s from the Indian Institute of Technology, Kharagpur, and was one of the top 30 candidates selected from the Indian National Mathematics Olympiad in 2005. He has also worked with the research labs of several multinational companies, including Philips, National Instruments, and Procter & Gamble.

09:40

Lex Fridman

Lex Fridman, MIT

Deep Learning for Self-Driving Cars

We will provide an overview of how deep neural network based approaches can contribute to each individual component of an autonomous vehicle including scene perception, scene understanding, localization, mapping, control, planning, driver sensing, and the end-to-end driving task. We will discuss the strengths and limitations of the fundamental deep learning methods involved, including convolutional neural networks, recurrent neural networks, and policy networks for deep reinforcement learning in a complex, sparsely-supervised, safety-critical world.
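
To make the "end-to-end driving task" concrete, the sketch below is an illustrative assumption on the editor's part, not material from the talk: a small PilotNet-style convolutional network that maps a single camera frame directly to a steering command. The layer sizes and the 66x200 input are arbitrary choices for the example.

```python
# Hypothetical end-to-end driving sketch: a CNN regressing a steering angle
# from one camera frame. Architecture and input size are illustrative only.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, 50), nn.ReLU(),
            nn.Linear(50, 1),                  # predicted steering angle
        )

    def forward(self, frame):
        return self.head(self.features(frame))

if __name__ == "__main__":
    net = SteeringNet()
    frame = torch.randn(1, 3, 66, 200)         # one RGB camera frame (N, C, H, W)
    print(net(frame).shape)                    # torch.Size([1, 1])
```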

Lex Fridman is a postdoc at MIT, working on computer vision and deep learning approaches in the context of self-driving cars with a human-in-the-loop. His work focuses on large-scale, real-world data, with the goal of building intelligent systems that have real world impact. Lex received his BS, MS, and PhD from Drexel University where he worked on applications of machine learning, computer vision, and decision fusion techniques in a number of fields including robotics, active authentication, activity recognition, and optimal resource allocation on multi-commodity networks. Before joining MIT, Lex was at Google working on machine learning and decision fusion methods for large-scale behavior-based authentication.

10:05

Charlie Tang

Charlie Tang, Apple

Deep Reinforcement Learning Advancements & Applications

Recent advances in Deep Reinforcement Learning have captured the imagination of both AI researchers and the general public. Combining the latest Deep Learning technology with Reinforcement Learning techniques has led to stunning breakthroughs, surpassing human-level performance at Atari games and the game of Go. Furthermore, Deep RL is being successfully adopted in a variety of fields such as robotics, control systems, translation, dialogue systems, and others. This talk will explore the intuitions, algorithms, and theories that have led to the recent success of Deep RL. A survey of exciting Deep RL applications and the tough challenges ahead will also be discussed.
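
For readers unfamiliar with the reinforcement learning side, here is a minimal tabular Q-learning loop on a toy task (an illustration added here, not the talk's material); deep RL replaces the Q table with a neural network so the same update can work on raw observations such as Atari frames.

```python
# Illustrative tabular Q-learning on a toy 1-D task; not from the talk.
import numpy as np

n_states, n_actions = 10, 2                   # positions 0..9; actions: left, right
goal = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    while s != goal:
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s_next == goal else -0.01
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q[:goal].argmax(axis=1))                # greedy policy: all 1s (move right)
```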

Charlie obtained his PhD in Machine Learning from the University of Toronto in 2015, advised by Geoffrey Hinton and Ruslan Salakhutdinov. His thesis focused on various aspects of Deep Learning technology. Charlie also holds a Bachelor's in Mechatronics Engineering and a Master's in Computer Science from the University of Waterloo. After his PhD, along with Ruslan Salakhutdinov and Nitish Srivastava, Charlie co-founded a startup focused on the application of Deep Learning based vision algorithms. Currently, Charlie is a research scientist at Apple Inc. Charlie's research interests include Deep Learning, Vision, Neuroscience and Robotics. He is one of the few competitors to have reached the #1 ranking on Kaggle.com, a widely popular machine learning competition platform. Charlie is also a Canadian national chess master.

10:45

COFFEE

COMMUNICATION & SIMULATION

11:25

Gaurav Kumar Singh

Gaurav Kumar Singh, Ford Motor Company

Gaming: Using Virtual World Data in Deep Learning Approaches for Autonomous Driving Research

Developing autonomous driving capabilities through machine and deep learning requires training on huge amounts of annotated data. Obtaining such training data takes considerable effort, not to mention time. This talk will explore the possibility of accelerating autonomous driving research by training machine and deep learning models on objects in a rich virtual world. The talk will also briefly comment on how models trained on simulated data perform when tested on real-world driving data.
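
A rough sketch of the sim-to-real idea described above (a toy example added here, not the speaker's experiments): a classifier is fit on plentiful "simulated" samples and then evaluated on "real" samples whose feature distribution is slightly shifted, which is where the performance gap appears.

```python
# Toy sim-to-real illustration: train on abundant simulated data, then measure
# the accuracy drop on "real" data with a shifted feature distribution.
import numpy as np

rng = np.random.default_rng(42)

def make_data(n, shift=0.0):
    # labels depend on the true underlying feature; `shift` biases the
    # *observed* feature, mimicking a simulation-to-reality sensor gap
    X_true = rng.normal(size=(n, 2))
    y = (X_true[:, 0] > 0).astype(int)
    X_obs = X_true.copy()
    X_obs[:, 0] += shift
    return X_obs, y

X_sim, y_sim = make_data(5000, shift=0.0)     # abundant simulated data
X_real, y_real = make_data(500, shift=0.7)    # scarce, shifted real data

# least-squares linear classifier trained purely on simulation
w, *_ = np.linalg.lstsq(X_sim, 2 * y_sim - 1, rcond=None)
acc = lambda X, y: ((X @ w > 0).astype(int) == y).mean()
print(f"sim accuracy:  {acc(X_sim, y_sim):.2f}")
print(f"real accuracy: {acc(X_real, y_real):.2f}")   # typically lower
```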

Gaurav Kumar Singh is a Machine and Deep Learning Researcher in Research and Advanced Engineering at Ford Motor Company, located in Dearborn, Michigan. He has over 6 years of research experience ranging from control systems to machine learning and data science. On the side, he advises friends on ways to apply machine learning techniques in their startups. He has also served as a project reviewer and mentor for the Machine Learning and Self-Driving Car Nanodegrees at Udacity. Gaurav graduated with a Master’s degree in Electrical and Computer Engineering from the University of Michigan, Ann Arbor, in December 2015. He received his Bachelor of Technology (B.Tech) degree from the National Institute of Technology, Trichy, India, in 2014.

11:55

Gaurav Bansal

Gaurav Bansal, Toyota InfoTechnology Center

Cooperative Automated Driving

Gaurav is a Senior Researcher at the Toyota InfoTechnology Center in Mountain View, CA, where he leads several research initiatives on the design of communication systems for automated driving. Gaurav is an expert in vehicular communications, with pioneering contributions in Dedicated Short Range Communications (DSRC) congestion control and in innovative use cases that leverage connectivity in cars. His current research interests also include millimeter wave and full-duplex wireless communications. Gaurav represents Toyota in the Automakers’ Vehicle Safety Communication Consortium and in the SAE and ETSI standardization bodies. Gaurav's paper on DSRC congestion control received the Best Paper Award at the IEEE WiVEC Symposium, and he holds several patents in the field. Gaurav serves on the editorial board of the IEEE Communications Surveys and Tutorials journal and the IEEE Connected Vehicles Initiative. He holds Electrical Engineering degrees from the Indian Institute of Technology, Kanpur and The University of British Columbia.

12:25

LUNCH

THE HUMAN FACTOR

13:35

Joan Walker

Joan Walker, UC Berkeley

The Traffic Jam of Robots: Implications of Autonomous Vehicles on Trip-Making

From the engineers building the systems to the planners planning for them, people have different visions of the future of autonomous vehicles. Looking through the lens of human behavior, the talk will discuss the potential of these different futures to be realized. Research questions related to travel behavior will be discussed, current findings will be summarized, and suggestions for planning for the future will be made.

Joan Walker is a Professor of Civil and Environmental Engineering at UC Berkeley, where she currently serves as Co-Director of the Center for Global Metropolitan Studies. Her research focus is behavioral modeling, with expertise in discrete choice analysis and travel behavior. She works to improve the models that are used for transportation planning, policy, and operations. She received her Bachelor's degree in Civil Engineering from UC Berkeley and her Master's and PhD degrees in Civil and Environmental Engineering from MIT. Prior to joining UC Berkeley, she was Director of Demand Modeling at Caliper Corporation and an Assistant Professor of Geography and Environment at Boston University. She is a recipient of the Presidential Early Career Award for Scientists and Engineers (PECASE) – the highest honor bestowed by the U.S. government on scientists and engineers beginning their independent careers. She is an Associate Editor of Transportation Science and the current Chair of the Committee on Transportation Demand Forecasting (ADB40) for the Transportation Research Board of the National Academies. She has worked with colleagues to launch the Zephyr Foundation for Advancing Travel Analysis Methods (zephyrtransport.org).

14:00

Modar Alaoui

Modar Alaoui, Eyeris

Driver Monitoring For Connected Semi-autonomous Vehicles & The Future Of Automotive HMI

This session will cover an artificially intelligent system for monitoring driver attention, cognitive awareness, and emotional distraction. We reveal how the technology reads facial micro-expressions in real time to authenticate drivers and to detect their seven universal emotions, gender, and age group, along with eye tracking, 3D head pose, and gaze estimation. During the second half of this session, we will cover a number of derived driver metrics that trigger the activation of various reactive support systems, necessary for saving lives and improving driving behavior through better Human Machine Interfaces. This session will end with a highly rated 1-minute live demo on stage!
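
As one way such a system is often structured (an assumption for illustration, not Eyeris' actual architecture), the sketch below uses a single shared face embedding with separate output heads for the attributes the abstract lists.

```python
# Toy multi-task driver-monitoring head: one shared face embedding feeding
# emotion, gender, age-group, head-pose, and gaze outputs. Illustrative only.
import torch
import torch.nn as nn

class DriverMonitoringHead(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in for a face-crop CNN
            nn.Flatten(), nn.Linear(64 * 64, embed_dim), nn.ReLU())
        self.emotion = nn.Linear(embed_dim, 7)    # seven universal emotions
        self.gender = nn.Linear(embed_dim, 2)
        self.age_group = nn.Linear(embed_dim, 4)
        self.head_pose = nn.Linear(embed_dim, 3)  # yaw, pitch, roll
        self.gaze = nn.Linear(embed_dim, 2)       # gaze direction (yaw, pitch)

    def forward(self, face):
        z = self.backbone(face)
        return {
            "emotion": self.emotion(z).softmax(dim=-1),
            "gender": self.gender(z).softmax(dim=-1),
            "age_group": self.age_group(z).softmax(dim=-1),
            "head_pose": self.head_pose(z),
            "gaze": self.gaze(z),
        }

if __name__ == "__main__":
    out = DriverMonitoringHead()(torch.randn(1, 1, 64, 64))   # one face crop
    print({k: tuple(v.shape) for k, v in out.items()})
```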

Modar is a serial entrepreneur and expert in AI-based vision software development. He is currently founder and CEO at Eyeris, developer of EmoVu, deep learning-based emotion recognition software that reads facial micro-expressions. Eyeris uses Convolutional Neural Networks (CNNs) as its Deep Learning architecture to train and deploy its algorithms into a number of today’s commercial applications. Modar combines a decade of experience across Human Machine Interaction (HMI) and Audience Behavioral Measurement. He is a frequent keynote speaker on “Ambient Intelligence”, a winner of several technology and innovation awards, and has been featured in many major publications for his work.

14:25

Patrick Lin

Patrick Lin, California Polytechnic State University

Ethics and Autonomous Vehicles: How AI Decisions Can Create New Risks

With self-driving cars, replacing the human with an AI driver also means shifting the burden of responsibilities and liabilities to technology developers. Because AI decisions are either scripted or determined by learning algorithms, they seem premeditated in an important sense, unlike a bad human reflex, for instance; and the ethical and legal implications are unclear. This talk introduces the radical challenge that AI poses to responsibility and decision-making, not just for crash scenarios but also everyday decisions that create or transfer risk, including in self-navigation decisions.

Patrick Lin, PhD, is the director of the Ethics + Emerging Sciences Group, based at California Polytechnic State University, San Luis Obispo, where he is a philosophy professor. Other current and past affiliations include: Stanford Engineering, Stanford Law, US Naval Academy, Dartmouth College, Notre Dame, World Economic Forum, and UNIDIR. He is well published in technology ethics, especially on robotics and AI—including the books Robot Ethics (MIT Press, 2012) and Robot Ethics 2.0 (Oxford University Press, forthcoming in 2017). Dr. Lin regularly gives invited briefings to industry, media, and government; and he teaches courses in ethics, technology, and law.

DATA & PREDICTIVE INTELLIGENCE

14:50

John Cordell

John Cordell, Xevo AI

AI, The Cloud, and Data: Building the Foundation of the Autonomous Vehicle

While Silicon Valley software companies are battling to be the first to create fully autonomous vehicles, there is a quiet evolution happening in the background that is building the foundation for these vehicles: data collection, with insights powered by AI and machine learning. AI technology is learning about driver habits from a variety of data collection points, and this data gives automakers powerful business intelligence that will help them shape the vehicles and services of tomorrow, while providing customers with an unforgettable personalized experience. In this presentation, John Cordell, Chief Product Officer of Xevo AI, will share his thoughts on how the connected vehicle technology of today and the near future will shape autonomous and self-driving systems. He will detail the technology that can be applied today to power the autonomous technology of tomorrow. He will also draw on his experience with Xevo in the connected car space, including how automotive machine learning can run even on the limited CPU power of low-cost devices already in, or soon to be in, most cars. Self-driving cars may not be ready to hit the road just yet, but the technology of the smart cars of tomorrow can be implemented in cars today.

John Cordell is the Chief Product Officer of Xevo AI. He began his career at Microsoft during the 1990s, where he was one of the original developers of Internet Explorer. He went on to cofound Avogadro in 2000, a leading mobile software company that was acquired one year later by OpenWave. Before joining Xevo, John joined Surround.io in 2014 as a lead software architect and the head of product design. John has also worked in film writing and production in Los Angeles as a technical consultant on the AMC drama series about the early PC revolution, Halt and Catch Fire.

15:15

COFFEE

15:55

PANEL: How Can We Apply ML & Sensing Technologies to Accelerate The Autonomous Vehicle?

Ioannis Petousis

Ioannis Petousis, Renovo

Panelist

Dr. Ioannis Petousis is the Head of Data Science for Renovo. His work focuses on the analysis and manipulation of data for the next generation of automotive vehicles. He uses methods ranging from machine learning to optimal control and physical modeling. He holds a Ph.D. from Stanford University.

Sam Kherat

Sam Kherat, Bradley University

Panelist

Dr. Sam Kherat is an Adjunct Professor at Bradley University, Peoria, Illinois, responsible for robotics and other mechanical engineering courses. He joined Caterpillar, Inc. in 1996. Dr. Kherat helped found and was appointed Manager of Caterpillar’s automation center in Pittsburgh, PA, in November 2007. Prior to that, he was technical lead for automation and robotics programs including automated mining trucks, cycle planning, underground mining automation, and automated excavation. He was Caterpillar’s Project Manager for the DARPA Grand Challenges and the 2007 Urban Challenge won by the Caterpillar-Carnegie Mellon University team. Dr. Kherat received his Master’s degree in Electrical Engineering from Bradley University, Peoria, Illinois (1987) and his Ph.D. (1994) in Aeronautics and Astronautics from Purdue University, West Lafayette, Indiana.

Tim Higgins

Tim Higgins, Wall Street Journal

Moderator

Tim Higgins is a reporter for the Wall Street Journal covering the future of cars. He focuses on Tesla, self-driving vehicles, and what traditional automakers are doing in Silicon Valley. He previously reported on Apple and spent almost a decade in Detroit covering the auto industry. Higgins wrote about the bankruptcy of General Motors and broke the news that GM would name Mary Barra as the first female CEO of a global automaker. He can be found on Twitter @timkhiggins.

16:35

Luca Rigazio

Luca Rigazio, Panasonic Silicon Valley Laboratory

Unsupervised Everything

The large amount of multi-sensory data available for autonomous intelligent systems is just astounding. The power of deep architectures to model these practically unlimited datasets is limited by only two factors: computational resources and labels for supervised learning. While horizontal scalability of training is still being improved, computational resources are just a Cap-Ex issue. I argue that the need for accurate labels is more than a Cap-Ex problem, as it requires careful interpretation of what to label and how, especially in complex and multi-sensory settings. At the risk of stating the obvious, we just want unsupervised learning to work for everything we do, right now. While this has been a "want" of the AI/machine-learning community for quite some time, unsupervised learning made an impressive leap during the last year. I will discuss the latest breakthroughs, highlight the massive potential for autonomous systems, and present the latest results from our team.
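
A minimal example of the unsupervised setting the talk argues for (a generic sketch, not the speaker's method): an autoencoder learns a compact representation of unlabeled sensor features purely by reconstruction, with no labels involved.

```python
# Minimal unsupervised representation learning: an autoencoder on unlabeled
# feature vectors. Generic illustration, not the methods presented in the talk.
import torch
import torch.nn as nn

x = torch.randn(256, 32)                      # unlabeled sensor feature vectors
model = nn.Sequential(                        # encoder -> 8-d code -> decoder
    nn.Linear(32, 8), nn.ReLU(), nn.Linear(8, 32))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(200):                       # learn to reconstruct the input
    recon = model(x)
    loss = nn.functional.mse_loss(recon, x)   # no labels involved
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"reconstruction error: {loss.item():.4f}")
```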

My belief is that intelligent software, based on AI and machine learning, will take over the world, and that autonomous systems, made up of autonomous agents, vehicles, robots, and drones, are just around the corner. My focus is to give these machines a human touch and to give humans access to their raw power without the hurdles of speaking their language. I like to start from real user problems and leverage machine learning to design solutions that tie together software, hardware, and sensors and achieve a high degree of autonomy as well as a high level of usability and satisfaction.

17:00

CONVERSATION & DRINKS

08:15

COFFEE & REGISTRATION

09:00

WELCOME

Robert Seidl

Robert Seidl, Motus Ventures

Compere

Robert Seidl started his first company while still in high school a long time ago. For the past three decades, he has started a number of software companies, held engineering as well as marketing and executive leadership roles, and worked in well-known Silicon Valley companies like Apple and Adobe following acquisitions. Robert has an engineering background, but has always focused on creating user-friendly products. In 1995 he founded the company that built the first commercial dedicated web page editor and web site management tool, PageMill, which was quickly acquired by Adobe and generated over $20M in sales in its first year. In 2000, Metacreations Inc. acquired his next company, Canoma, whose technology allowed rapid creation of photorealistic, textured 3D models from images. You might also have seen these concepts in Google’s Building Maker software. Since 2002, he has managed Realtime Video Systems, a technology licensing company that provides US drone vendors with video processing and object tracking software. Meanwhile, he cofounded an MDV/Accel/Emergence/Walden-funded startup called Genius.com, which provides sales and marketing people at B2B companies with easy-to-use, real-time lead generation and nurturing tools.

SHAPING TOMORROW

09:15

Bryan Mistele

Bryan Mistele, INRIX

Data Driven: Connecting Cars for Smarter Cities

With over half of the world’s population living in cities, managing massive population growth is one of the most important development challenges of the 21st century. Today, we have 28 megacities of 10 million or more people. By 2030, this will increase to 41 megacities, placing a huge strain on our aging infrastructure.

INRIX will share how breakthroughs in location technology, connectivity and big data are poised to transform urban mobility. Through its collaboration with leading automakers and governments, INRIX will show how it’s helping reduce energy consumption, pollution and the economic toll of traffic congestion around the world.

Bryan Mistele is the co-founder, President & Chief Executive Officer of INRIX, a leading provider of real-time traffic information, connected car services and analytics worldwide. INRIX is at the forefront of connecting cars to smarter cities and serves more than 350 blue-chip customers in more than 40 countries around the world. The company leverages big data analytics to reduce the individual, economic and environmental toll of traffic congestion.

Prior to INRIX, Mistele was an executive at Microsoft, responsible for successfully building and managing four businesses within the company, including Microsoft’s Automotive, Mobile Services, Real Estate and Personal Finance/Investing business units. Prior to Microsoft, Mistele worked at the Ford Motor Company. Bryan holds a B.S. in computer engineering from the University of Michigan and an MBA from the Harvard Business School.

09:40

Guan Wang

Guan Wang, NIO

Machine Learning for the Next Generation of Digital Cockpit

As cars become self-driving, the in-cabin experience is also being revolutionized. Equipped with a machine learning engine, the car will know the driver personally, know their commute preferences, and even know their families. Cross-vehicle in-cabin knowledge discovery can make cars capable of providing real-time assistance, such as point-of-interest recommendation, trip visualization and monitoring, and just-in-context services. We will discuss several data representation and machine learning techniques, along with a system architecture, for driver behavior learning and real-time pattern discovery. It is also worth noting that these fundamental techniques are shared between outward-facing autonomous driving and inward-facing context learning.

Guan Wang is a staff machine learning engineer at NIO (NextEV). He is a hackathon champion and entrepreneur. He is the first AI person at NIO, where he is the main designer, architect, and engineer for a range of NIO’s in-house AI products, especially with respect to vehicle perception. Before NIO, he was the first person to bring machine learning to the business analytics department at LinkedIn and helped the team grow from 3 people to about 100 people in a short time. Guan Wang holds a Ph.D. in Computer Science, specializing in machine learning and data mining, from the University of Illinois at Chicago, advised by Prof. Philip S. Yu.

10:05

George Hotz

George Hotz, comma.ai

Self Driving Lessons From Comma

George will discuss deep learning, autonomy in vehicles, and the latest updates from the comma.ai journey.

George Hotz first became known at 17 when he developed a procedure to unlock the original iPhone. He is the founder and CEO of comma.ai, building and open sourcing aftermarket, user installable, self driving kits.

10:30

COFFEE

WHERE ARE WE HEADED?

11:15

Krishna Murthy Gurumurthy

Krishna Murthy Gurumurthy, University of Texas

Anticipating a World of Shared Fully-Automated Vehicles

Fully-automated vehicles (AVs) are no longer a distant dream. With automotive and technology companies and transportation researchers powering through AV-related research, AVs will be available to the public in the near future. A new era in automotive technology dictates a new set of transportation problems to tackle and a new set of mobility choices to address. AVs, with the right adoption levels, can reduce car ownership by almost 50%, and the increased safety provided by AVs can result in crash savings of up to $3,000. However, this comes at a cost: automation implies higher vehicle miles travelled and possibly higher congestion levels on our networks. This presentation by Krishna Murthy, on behalf of Dr. Kara Kockelman from UT, will focus on how shared AVs can help alleviate these new complications, and he will share results that support this claim about shared vehicle technology.
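
To illustrate the shared-fleet intuition (a toy model, not Dr. Kockelman's agent-based microsimulation), the sketch below serves a stream of trips with a small fleet on a one-dimensional corridor: far fewer vehicles are needed than privately owned cars, but empty repositioning miles are added, echoing the "fewer cars, more miles travelled" trade-off.

```python
# Toy shared-fleet simulation on a 10-mile corridor; illustrative numbers only.
import numpy as np

rng = np.random.default_rng(1)
n_trips = 200
origins = rng.uniform(0, 10, n_trips)         # trip start positions (miles)
dests = rng.uniform(0, 10, n_trips)           # trip end positions (miles)

fleet = np.zeros(20)                          # 20 shared AVs vs 200 private cars
service_miles = empty_miles = 0.0
for o, d in zip(origins, dests):
    v = int(np.abs(fleet - o).argmin())       # dispatch the nearest idle vehicle
    empty_miles += abs(fleet[v] - o)          # deadhead miles to the pickup
    service_miles += abs(d - o)
    fleet[v] = d                              # vehicle waits at the drop-off

print(f"vehicles used: {len(fleet)} instead of {n_trips} privately owned cars")
print(f"trip miles: {service_miles:.0f}, extra empty miles: {empty_miles:.0f}")
```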

Krishna Murthy Gurumurthy is a Master's student at the University of Texas at Austin and a Graduate Research Assistant to Dr. Kara Kockelman. He received his Bachelor's from the National Institute of Technology Karnataka, India, and has been working on transportation-related research since his junior year there. At UT, he is actively pursuing research on fully-automated vehicles, running agent-based microsimulations of shared fleets of fully-automated vehicles and testing the advantages of dynamic ride-sharing. He also has a nationwide survey for TxDOT in the works.

11:40

Mohan M Trivedi

Mohan M Trivedi, University of California

A Quest for Human-Robot Cohabitation in the Age of Self Driving Automobiles

With recent advances in imaging sensors, embedded computing, machine perception, learning, planning and control, intelligent vehicle technology is moving tantalizingly closer to a future with large-scale deployment of self-driving automobiles on roadways. However, we are also realizing that many important issues need deeper examination so that the safety, reliability and robustness of these highly complex systems can be assured. Toward this end, we highlight research issues as they relate to the understanding of human agents interacting with the automated vehicle, who are either occupants of such vehicles, or who are in the near vicinity of the vehicles. The main idea is to develop an approach to properly design, implement and evaluate methods and computational frameworks for distributed systems where intelligent robots and humans cohabit, with proper understanding of mutual goals, plans, intentions, risks and safety parameters. We emphasize the need and the implications of utilizing a holistic approach, where driving in a naturalistic context is observed over long periods to learn behaviors of human agents in order to predict intentions and interactivity patterns of all intelligent agents. Development of highly automated vehicles opens new research avenues in machine learning, modeling, active control, perception of dynamic events, and novel architectures for distributed cognitive systems. This presentation will give examples of some of the accomplishments in the design of such systems and also highlight important research challenges yet to be overcome.

Mohan Manubhai Trivedi is a Distinguished Professor of Electrical and Computer Engineering and founding director of the Computer Vision and Robotics Research Laboratory, as well as the Laboratory for Intelligent and Safe Automobiles (LISA) at the University of California San Diego. Trivedi’s team has played a key role in several major research collaborative initiatives. These include design, development and deployment of distributed video arrays for wide area activity analysis, privacy preserving filters for surveillance video arrays for transportation infrastructures including for freeways, international bridges, and stadiums; systems for vehicle collision avoidance, pedestrian protection and intent analysis, lane-change/turn/merge assistance; vision-based systems for “smart” airbags, predictive driver intent and activity analysis systems; and panoramic-view surround safety systems and autonomous robotic teams for railway track maintenance and for hazardous environments. His team is recognized as the most prolific and most cited in the intelligent vehicles and intelligent transportation systems field. He has won over 20 “Best/Finalist” Paper awards, and has received the IEEE ITS Society’s Outstanding Research Award and LEAD Institution Award as well as the Meritorious Service and Pioneer Award (Technical Activities) of the IEEE Computer Society. He has given over 100 keynote/plenary talks and he regularly serves on panels dealing with technological, strategic, privacy, and ethical issues surrounding research areas he is involved in. He is a Fellow of IEEE, SPIE, and IAPR. Trivedi has served as the Robotics Technical Committee Chair for the IEEE Computer Society, on the Governing Boards of the IEEE Systems, Man & Cybernetics and ITSC Society, Editor-in-Chief of the Machine Vision Applications journal and charter member/vice-chair of the University of California system-wide Digital Media Innovation (UC Discovery) program. Trivedi serves regularly as a consultant to industry and government agencies in the USA and abroad.

12:05

Katherine Driggs-Campbell

Katherine Driggs-Campbell, UC Berkeley

Towards Trustworthy Autonomy: Robust, Informative Predictions & Intuitive Control Frameworks

While a future with ubiquitous autonomy approaches, the transition will not be instantaneous. This suggests: (1) levels of autonomy will be introduced incrementally and (2) autonomous vehicles must be capable of driving with humans on the road. Consequently, human drivers must be rigorously modeled in a manner that is easily integrated into control. We present robust, predictive modeling methods and innovative design approaches for optimizing interaction between humans and autonomy. These techniques were applied in safety systems for semiautonomous frameworks and autonomous vehicles that mimic nuanced human interactions. Such systems demonstrate improved predictability and trustworthiness, crucial characteristics for pervasive autonomy.
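
A generic, minimal example of the kind of predictive driver model being discussed (not the speaker's method, which is learned from data and far richer): constant-velocity extrapolation of a neighboring driver's position, with uncertainty that grows over the prediction horizon and that a controller could treat as a constraint.

```python
# Toy prediction of another driver's future positions; illustrative only.
import numpy as np

def predict(pos, vel, horizon=3.0, dt=0.5, sigma=0.5):
    """Mean future positions and a 1-sigma uncertainty radius per time step."""
    steps = np.arange(dt, horizon + dt, dt)
    means = pos + np.outer(steps, vel)        # straight-line extrapolation
    radii = sigma * steps                     # uncertainty grows with lookahead
    return steps, means, radii

steps, means, radii = predict(pos=np.array([0.0, 0.0]), vel=np.array([10.0, 0.0]))
for t, (x, y), r in zip(steps, means, radii):
    print(f"t+{t:.1f}s: ({x:5.1f}, {y:4.1f}) m, +/- {r:.2f} m")
```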

Katie is currently a PhD Candidate in Electrical Engineering and Computer Science at the University of California, Berkeley, advised by Professor Ruzena Bajcsy. Prior to that, she received a B.S.E. from Arizona State University in 2012 and an M.S. from UC Berkeley in 2015. Her research considers the integration of autonomy into human-dominated fields, in terms of safe interaction in everyday life, with a strong emphasis on novel modeling methods, experimental design, and control frameworks. She received the Demetri Angelakos Memorial Achievement Award for her contributions to the community. Beyond research, Katie enjoys outreach, reading, and learning new trivia.

12:30

LUNCH

13:30

Tim Wheeler

Tim Wheeler, Stanford Intelligent Systems Lab

Establishing Trust in Autonomous Vehicles

Autonomous vehicles and other emerging active driving systems require advanced science and engineering methodologies by which trust can be established. Lack of public trust in the safety and underlying technology of autonomous vehicles currently impedes their widespread acceptance and limits the impact autonomy can have in improving both safety and efficiency. The route to building trust lies in the creation of a scientific, unified, transparent framework to optimize and evaluate active driving systems. This talk discusses the challenges in safety validation for autonomous vehicles and discusses methods for overcoming them.
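
A back-of-the-envelope illustration of why naive mile-accumulation validation is hard (assumed numbers, not figures from the talk): by the "rule of three", observing zero failures in n independent trials bounds the failure rate below roughly 3/n at 95% confidence, so demonstrating a very low failure rate requires an enormous amount of failure-free driving.

```python
# Rule-of-three estimate of failure-free test miles needed; assumed rate.
fatality_rate = 1.0 / 100_000_000     # ~1 fatality per 100 million miles (assumed)
confidence_factor = 3.0               # "rule of three" for a 95% upper bound

miles_needed = confidence_factor / fatality_rate
print(f"failure-free test miles needed: {miles_needed:,.0f}")   # 300,000,000
```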

Tim Allan Wheeler is a Ph.D. candidate in the Stanford Intelligent Systems Laboratory of Prof. Mykel Kochenderfer applying decision making theory to the problem of automotive safety. Tim is a Burt and Deedee McMurtry fellow. He received his B.S. in aerospace engineering from U.C. San Diego in 2013. Tim's research focuses on autonomous cars, particularly in designing tools for the rigorous analysis of active driving safety systems.

INVESTING IN AUTONOMOUS VEHICLES

13:55

Sam Kherat

Sam Kherat, Bradley University

Autonomous Mobility & Challenges In Off-Road Applications

Dr. Sam Kherat is an Adjunct Professor at Bradley University, Peoria, Illinois, responsible for robotics and other mechanical engineering courses. He joined Caterpillar, Inc. in 1996. Dr. Kherat helped found and was appointed Manager of Caterpillar’s automation center in Pittsburgh, PA, in November 2007. Prior to that, he was technical lead for automation and robotics programs including automated mining trucks, cycle planning, underground mining automation, and automated excavation. He was Caterpillar’s Project Manager for the DARPA Grand Challenges and the 2007 Urban Challenge won by the Caterpillar-Carnegie Mellon University team. Dr. Kherat received his Master’s degree in Electrical Engineering from Bradley University, Peoria, Illinois (1987) and his Ph.D. (1994) in Aeronautics and Astronautics from Purdue University, West Lafayette, Indiana.

14:20

PANEL: What Are The Key Opportunities & Challenges of Investing in Autonomous Vehicles?

Robert Seidl

Robert Seidl, Motus Ventures

Panelist

Robert Seidl started his first company while still in high school a long time ago. For the past three decades, he has started a number of software companies, held engineering as well as marketing and executive leadership roles, and worked in well-known Silicon Valley companies like Apple and Adobe following acquisitions. Robert has an engineering background, but has always focused on creating user-friendly products. In 1995 he founded the company that built the first commercial dedicated web page editor and web site management tool, PageMill, which was quickly acquired by Adobe and generated over $20M in sales in its first year. In 2000, Metacreations Inc. acquired his next company, Canoma, whose technology allowed rapid creation of photorealistic, textured 3D models from images. You might also have seen these concepts in Google’s Building Maker software. Since 2002, he has managed Realtime Video Systems, a technology licensing company that provides US drone vendors with video processing and object tracking software. Meanwhile, he cofounded an MDV/Accel/Emergence/Walden-funded startup called Genius.com, which provides sales and marketing people at B2B companies with easy-to-use, real-time lead generation and nurturing tools.

Quin Garcia

Quin Garcia, AutoTech Ventures

Panelist

Quin brings to Autotech a passion for forming ground transport startups, assembling teams of complementary people, raising capital, and growing via partnerships with corporations. He was part of the founding team at Better Place, responsible for partnerships with automakers and auto parts suppliers while living in Israel, Japan, and China. Prior to that, he was a management consultant at Strategic Management Solutions, serving automaker and consumer electronics clients. While earning his MS degree in Management Science and Automotive Engineering from Stanford, he worked at Stanford’s Dynamic Design Lab developing autonomous vehicles. While earning his BS degree in Applied Economics and Management from Cornell, he was a leader of the Cornell Hybrid Electric Vehicle Team. Quin speaks Spanish and Chinese, and enjoys tennis, basketball, and driving racecars.

Sudha Jamthe

Sudha Jamthe, Stanford University

Panelist

Sudha Jamthe is the author of "2030 The Driverless World: Business Transformation from Autonomous Vehicles" and three IoT books. She teaches IoT Business at Stanford Continuing Studies and aspires to bring Cognitive IoT and Autonomous Vehicles together.

Annie Lien

Annie Lien, Independent

Moderator

Annie Lien is a veteran Autonomous Driving expert, leader, advisor, and public speaker. Annie has 12+ years of experience in automotive R&D with a diverse background in technical strategy, business, product, marketing, PR/communications, government-related affairs, customer focus, and user experience/human factors. While working on UX and Machine Learning projects at Robert Bosch's Silicon Valley research center, Annie also moonlighted on the weekends as Team Leader of Team AnnieWay for the DARPA Urban Challenge 2007. Founded by German robotics research professionals and graduate students, Team AnnieWay was an “underdog” team with very little money, manpower, resources, and time, and, against all odds, successfully qualified as a Finalist for the DARPA race. Thereafter, at VW Group’s Silicon Valley innovation lab, Annie co-created and led company-wide initiatives on North American Product, PR, Legal, and Government activities for Audi Autonomous Driving. In recent years Annie briefly joined a computer vision startup; worked on her own startup; was CBDO & CMO at Perrone Robotics; and advises/consults for entrepreneurs, investors, auto OEMs, suppliers, and various tech and business entities.

15:00

END OF SUMMIT
