Robot Learning via Human Adversarial Games
Much work in robotics has focused on “human-in-the-loop” learning techniques that improve the efficiency of the learning process. However, these algorithms make the strong assumption of a cooperative human who assists the robot. In reality, people also tend to act adversarially toward deployed robotic systems. We show that such adversarial behavior can in fact improve the robustness of the learned models: we propose a physical framework that leverages perturbations applied by a human adversary to guide the robot toward more robust models. In a manipulation task, we show that grasping success improves significantly when the robot trains with a human adversary, compared to training in a self-supervised manner. This work opens a range of exciting potential applications in other domains as well, such as autonomous driving.
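The training loop sketched in the abstract can be illustrated with a toy example: the robot attempts a grasp, the human adversary perturbs the held object, and the grasp is rewarded only if it survives the perturbation, so the policy drifts toward more robust grasps. This is a minimal hypothetical sketch, not the authors' implementation; `GraspPolicy`, `attempt_grasp`, and the robustness values are all illustrative stand-ins.

```python
import random

class GraspPolicy:
    """Toy bandit-style policy: tracks a value estimate per candidate grasp."""

    def __init__(self, n_grasps, lr=0.1, epsilon=0.2):
        self.values = [0.0] * n_grasps
        self.lr = lr
        self.epsilon = epsilon

    def select(self):
        # Epsilon-greedy choice over candidate grasps.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda g: self.values[g])

    def update(self, grasp, reward):
        # Move the value estimate toward the observed reward.
        self.values[grasp] += self.lr * (reward - self.values[grasp])

def attempt_grasp(grasp, robustness):
    """Stand-in for a physical grasp attempt under an adversarial perturbation.

    The grasp survives the human's perturbation with a grasp-specific
    probability (a crude proxy for how firmly the object is held).
    """
    return random.random() < robustness[grasp]

def train(episodes=500, seed=0):
    random.seed(seed)
    # Two hypothetical candidate grasps: grasp 1 resists perturbations better.
    robustness = [0.4, 0.9]
    policy = GraspPolicy(n_grasps=2)
    for _ in range(episodes):
        g = policy.select()
        survived = attempt_grasp(g, robustness)
        # Surviving the adversary's perturbation is rewarded; losing the
        # object is penalized, steering learning toward robust grasps.
        policy.update(g, 1.0 if survived else -1.0)
    return policy

policy = train()
```

After training, the value estimate for the more perturbation-resistant grasp dominates, which is the intuition behind why an adversary can make the learned model more robust than self-supervised trial and error alone.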
Stefanos Nikolaidis is an Assistant Professor of Computer Science at the University of Southern California, where he directs the Interactive and Collaborative Autonomous Robotic Systems (ICAROS) Lab. Research in ICAROS spans the whole spectrum of human-robot interaction science: from distilling the fundamental mathematical principles that govern interactive behaviors, to developing approximation algorithms for deployed robotic systems and testing them "in the wild" with actual end users. Previously, Stefanos completed his PhD at Carnegie Mellon's Robotics Institute and received his MS from MIT. He also holds an MEng from the University of Tokyo and a BS from the National Technical University of Athens. Stefanos has worked as a research associate at the University of Washington, as a research specialist at MIT, and as a researcher at Square Enix in Tokyo. He received a Best Enabling Technologies Paper Award from the IEEE/ACM International Conference on Human-Robot Interaction in 2015, a best paper nomination from the same conference in 2018, and was a best paper award finalist at the International Symposium on Robotics in 2013.