The amount of multi-sensory data available to autonomous intelligent systems is astounding. The power of deep architectures to model these practically unlimited datasets is constrained by only two factors: computational resources and labels for supervised learning. While horizontal scalability of training is still improving, computational resources are ultimately a capital-expenditure issue. I argue that the need for accurate labels is more than a capital-expenditure problem: it requires careful judgment about what to label and how, especially in complex, multi-sensory settings. At the risk of stating the obvious, we simply want unsupervised learning to work for everything we do, right now. While this has been a long-standing goal of the AI and machine-learning community, unsupervised learning made an impressive leap during the last year. I will discuss the latest breakthroughs, highlight their massive potential for autonomous systems, and present recent results from our team.
I believe that intelligent software, based on AI and machine learning, will take over the world, and that autonomous systems, made up of autonomous agents, vehicles, robots and drones, are just around the corner. My focus is to give these machines a human touch and to give humans access to their raw power without the hurdle of speaking their language. I like to start from real user problems and leverage machine learning to design solutions that tie together software, hardware and sensors, achieving a high degree of autonomy as well as a high level of usability and satisfaction.