As the race to create the first fully autonomous, road-safe vehicle continues, numerous roadblocks and challenges are pushing companies to find innovative solutions.
Successful development of autonomous vehicles draws on a wide variety of disciplines, and some experts believe that a completely open-source autonomous car, on which anyone can test models, would be best poised to push the frontiers of the technology.
Jia Qing Yap is Executive Lead & Deep Learning Engineer at OpenSourceSDC, where the team is building an open-source self-driving car in Singapore. Their vision is to be the Android equivalent of an end-to-end full-autonomy model which any vehicle manufacturer can build upon and use. At the Deep Learning Summit in Singapore on 27-28 April, Jia will share expertise on the use of neural networks, reinforcement learning and computer vision in autonomous vehicle development.
I spoke to Jia ahead of the summit next week to learn more about what we can expect next in smart transport and deep learning.
How did you begin your work in deep learning?
I started looking into deep learning when Richard Socher publicly released his Stanford course materials on Deep Learning for Natural Language Processing, and NUS Prof Kan Min-Yen opened up a PhD class he was running using those materials (with Socher’s permission) to the public. I was fascinated by the improvements that Recurrent Neural Networks and Recursive Neural Tensor Networks (for sentiment analysis) brought over traditional bag-of-words natural language processing methods.
I later became interested in applying deep learning to self-driving cars, and met several PhDs in machine learning, computational neuroscience and robotics who shared the same interest, along with students of Udacity’s self-driving car engineering nanodegree and the wider Singapore Self-driving Car Engineering Meetup community I started. We came together to start OpenSourceSDC, with the vision of building the Android equivalent of a full-autonomy self-driving car model which any vehicle manufacturer can build upon and use. At OpenSourceSDC, we are interested in applying all kinds of deep learning architectures to the problem of autonomous driving, from perception through to path planning, as we believe that it is by bringing techniques from outside the field into it that we can achieve breakthrough innovations.
How essential are advances in CNNs to progressing the autonomous driving sector?
Convolutional Neural Networks (CNNs) have made significant contributions to solving the perception problem – use cases of CNNs for autonomous driving include traffic sign and traffic light classification, vehicle detection, image segmentation and even end-to-end models that map image input to steering angles.
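The core operation behind all of these use cases is the convolution: sliding a small kernel over an image to produce a feature map. As a minimal illustrative sketch (not OpenSourceSDC's code), the toy example below applies a hand-set vertical-edge kernel to a synthetic image with a bright stripe, such as a lane marking; in a real CNN, the kernel weights would be learned from data rather than written by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the
    image and take a weighted sum at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 6x6 "image" with a bright vertical stripe at column 3
# (a crude stand-in for a lane marking).
image = np.zeros((6, 6))
image[:, 3] = 1.0

# A vertical-edge kernel: in a CNN this would be *learned*, not hand-set.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

feature_map = conv2d(image, edge_kernel)
print(feature_map)  # strong responses where the stripe's edges lie
```

The feature map responds strongly (±3) only at the stripe's edges and is zero elsewhere; stacking many learned kernels, nonlinearities and pooling layers is what lets CNNs learn features instead of having engineers hard-code them.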
I’m also incredibly excited about the potential for stacking CNNs with other deep learning architectures like Recurrent Neural Networks, to take into account the temporal nature of driving data like steering angles.
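To make the CNN-plus-RNN idea concrete, here is a minimal sketch (with random, untrained weights chosen purely for illustration) of how per-frame CNN features could be fed through an Elman-style recurrent cell whose hidden state carries driving history, with a linear readout producing a steering angle. The point is the temporal wiring, not the numbers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-frame feature vectors from a CNN backbone:
# one feature vector per video frame (random for illustration).
seq_len, feat_dim, hidden_dim = 5, 8, 16
frames = rng.normal(size=(seq_len, feat_dim))

# A single Elman-style RNN cell plus a linear steering-angle readout.
# All weights are random and untrained; a real model would learn them.
W_xh = rng.normal(scale=0.1, size=(feat_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
W_hy = rng.normal(scale=0.1, size=(hidden_dim, 1))

h = np.zeros(hidden_dim)
for x in frames:                       # step through time, frame by frame
    h = np.tanh(x @ W_xh + h @ W_hh)   # hidden state accumulates history
steering_angle = float(h @ W_hy)       # scalar prediction for the last frame
print(steering_angle)
```

Because the hidden state is a function of every earlier frame, the prediction can depend on motion over time rather than a single snapshot, which is exactly what temporal data like steering-angle sequences call for.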
Computer vision is now almost a solved problem thanks to the feature learning contribution of CNNs. Previously, computer vision engineers had to hand-code the image features their models would detect, or rely on heuristics in defining their models – approaches which are not robust to the near-infinite scenarios a model could be presented with in autonomous driving.
What role will reinforcement learning play in autonomous driving?
Reinforcement learning requires a great deal of experimentation by the model in order to achieve performance that can match humans – allowing self-driving cars to do that in the real world would be incredibly dangerous. For reinforcement learning to be successfully applied, we first need to create a simulation environment representative enough of real life for the model to learn in.
That aside, there is much promise in deep reinforcement learning, where there have been efforts to combine deep learning models with reinforcement learning – the landmark example being AlphaGo by DeepMind, which combined Convolutional Neural Networks with reinforcement learning to beat Lee Sedol, the world champion, in the game of Go.
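The trial-and-error loop described above can be sketched with tabular Q-learning in a toy stand-in for a simulator: a 1-D "road" of five cells where the agent earns a reward for reaching the last cell. This is an illustrative example of the learning rule, not OpenSourceSDC's approach; in deep RL, the Q-table would be replaced by a neural network, as in DeepMind's work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy simulator: a 1-D road of 5 cells. The agent starts at cell 0
# and receives reward 1 only on reaching cell 4 (the goal).
n_states, n_actions = 5, 2           # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))  # tabular action-value estimates
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for _ in range(500):                 # training episodes
    s = 0
    while s != 4:
        # epsilon-greedy: mostly exploit current estimates, sometimes explore
        if rng.random() < eps:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax(Q[s]))
        s2 = max(0, s - 1) if a == 0 else min(4, s + 1)
        r = 1.0 if s2 == 4 else 0.0
        # Q-learning update: bootstrap from the best next action's value
        Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
        s = s2

# After training, the greedy policy should drive right toward the goal.
policy = np.argmax(Q, axis=1)
print(policy[:4])  # → [1 1 1 1]: "right" in every non-terminal state
```

The dangerous part in driving is precisely those exploratory random actions, which is why, as noted above, the experimentation has to happen in a faithful simulation before anything touches a real road.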
What developments can we expect to see in deep learning in the next 5 years?
I don’t want to speculate too much, but with the deepening of networks we might find alternative ways of updating the network parameters, instead of using backpropagation – I know Prof Shaowei Lin from SUTD's Brain Lab was looking into Deep Probability Flow. We will probably see further abstraction of deep learning code to make it even more accessible for any software engineer to implement. Hopefully, we will also see further efforts to open up the black box of deep learning. In addition, I believe we will see increasingly customized hardware for deep learning, as shown by the recent release of details about Google’s Tensor Processing Unit.
Outside of your own field, what area of deep learning advancements excites you most?
I’m excited about Generative Adversarial Networks (GANs)!
Can't make it to Singapore? The Deep Learning Summit will also be held in Boston on 25-26 May, London on 21-22 September, and Montreal on 12-13 October. View all upcoming events here.

Opinions expressed in this interview may not represent the views of RE•WORK. As a result some opinions may even go against the views of RE•WORK, but are posted in order to encourage debate and well-rounded knowledge sharing, and to allow alternate views to be presented to our community.
There's just over 1 week to go until the Deep Learning Summit in Singapore! The summit will take place alongside the Deep Learning in Finance Summit on 27-28 April.
Other speakers include Jeffrey de Fauw, Research Engineer, DeepMind; Anuroop Sriram, Research Scientist, Baidu Silicon Valley AI Lab; Jun Yang, Algorithm Architect, Alibaba; Hima Patel, Analytics Manager, VISA; Nicolas Papernot, Google PhD Fellow, Penn State University; Vikramank Singh, Software Engineer, Facebook; and Brian Cheung, PhD Student, UC Berkeley. View more details here. Tickets are limited for this event. Register your place now.