Puneet Dokania

Continual Learning: Quirks and Assumptions

In this talk, I'll present our recent work GDumb (ECCV 2020, oral) and discuss the quirks and assumptions encoded in recently proposed approaches to continual learning (CL). We argue that some of these assumptions oversimplify the problem to the point that it loses practical importance and becomes extremely easy to perform well on. To substantiate this, we propose GDumb, which (1) greedily stores samples in memory as they arrive and (2) at test time, trains a model from scratch using only the samples in memory. Even though GDumb is not designed for any particular CL problem, it obtains state-of-the-art accuracies (often by large margins) in almost all our experiments against a multitude of recently proposed algorithms. Surprisingly, it outperforms these approaches even on the CL formulations for which they were specifically designed. This, we believe, raises concerns about our progress in CL for classification. Overall, we hope our formulation, characterizations, and discussion will help in designing genuinely useful CL algorithms, and that GDumb will serve as a strong baseline for them.
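
To make the two-step recipe concrete, below is a minimal sketch of the "greedy sampler" half in Python. The class name `GreedyBalancedMemory` and its interface are illustrative assumptions, not the authors' reference implementation; the idea it captures is that samples are stored as they arrive under a fixed budget, with evictions chosen to keep the memory roughly class-balanced.

```python
import random
from collections import defaultdict

class GreedyBalancedMemory:
    """Class-balanced greedy sampler in the spirit of GDumb (illustrative sketch).

    Stores incoming (x, y) pairs up to a fixed budget k. Once the budget is
    reached, a sample from an under-represented class evicts a random sample
    from the currently largest class, so the memory stays roughly balanced.
    """

    def __init__(self, k):
        self.k = k                        # total memory budget
        self.buffer = defaultdict(list)   # class label -> list of stored inputs

    def __len__(self):
        return sum(len(xs) for xs in self.buffer.values())

    def add(self, x, y):
        if len(self) < self.k:
            self.buffer[y].append(x)      # budget not exhausted: always store
            return
        largest = max(self.buffer, key=lambda c: len(self.buffer[c]))
        # Only evict when the incoming sample's class is smaller than the
        # largest class; otherwise the incoming sample is simply dropped.
        if len(self.buffer[y]) < len(self.buffer[largest]):
            victims = self.buffer[largest]
            victims.pop(random.randrange(len(victims)))
            self.buffer[y].append(x)

    def samples(self):
        """All stored (x, y) pairs, for training the 'dumb learner' from scratch."""
        return [(x, y) for y, xs in self.buffer.items() for x in xs]
```

The "dumb learner" half is then ordinary supervised training: at test time, instantiate a fresh model and fit it on `samples()` alone, ignoring everything seen outside the buffer.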

3 Key Takeaways:

  1. Continual learning (CL) is important and will become crucial for companies such as Google and Facebook, which train their models on billions of samples using hundreds of GPUs over weeks of training.

  2. Even though CL is important, a truly practical use case for it does not yet exist.

  3. Experiment design and evaluation for CL based on time and space constraints need rethinking and better formulations.

Puneet holds two research positions: one as a senior researcher in machine learning and computer vision at the Torr Vision Group (University of Oxford), and another as a principal researcher at Five AI, an amazing startup based in Cambridge (UK). He obtained his PhD from INRIA and École Centrale Paris in France in 2016, after which he moved to Oxford as a postdoctoral researcher. Puneet's research theme revolves around developing "reliable and efficient algorithms with natural intelligence using deep learning". His current focus is primarily on continual learning, robustness, calibration, and parameter quantization.
