Cross-modal Understanding and Prediction for Cognitive Robots
Our brain works as a predictive machine: it constantly performs active inference, using both motor action and perception to minimize prediction error. In the first part of the talk, I will briefly introduce this hypothesis from neuroscience and psychology and propose a hierarchical predictive cognitive model for robots. In the second part, I will present how this cognitive model can be applied to cross-modal understanding and prediction for robots using state-of-the-art machine learning algorithms. A few robot video demonstrations based on these methods will also be shown.
Junpei "Joni" Zhong is currently a research scientist at the National Institute of Advanced Industrial Science and Technology (AIST), Tokyo, Japan. He received a BEng from the South China University of Technology in 2006, an MPhil from the Hong Kong Polytechnic University in 2010, and a PhD ("with great distinction") from the University of Hamburg in 2015. From 2014 to 2018, he participated in several projects in Germany, the UK, and Japan. His research interests are machine intelligence, machine learning, cognitive robotics, and assistive robotics. He was awarded an EU Marie Curie Fellowship from 2010 to 2013. He is also a founding member of the Consciousness Research Network and a Guest Editor of the journals Complexity and Interaction Studies.