Semantic descriptors for 3D object understanding in AR
We present a method for jointly analyzing images, 3D objects, and text to generate a unified semantic descriptor that captures both the shape and the class of objects in color images. The result is an AI agent that, much as humans do, predicts semantic and geometric scene data from the physical world. We show how such a system is used at Selerio to pull physical objects into the virtual world for Augmented Reality applications.
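The abstract does not spell out how the unified descriptor is used, but a common pattern for joint image/shape/text analysis is cross-modal retrieval in a shared embedding space: learned encoders map each modality to the same vector space, and a query embedding retrieves the nearest 3D model. A minimal sketch of that retrieval step, with toy random vectors standing in for learned embeddings (all names and dimensions here are illustrative assumptions, not Selerio's actual system):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

# Toy shared embedding space: in a real system, learned encoders would map
# each modality (image, 3D shape, text) into this common space.
rng = np.random.default_rng(0)
shape_db = {name: normalize(rng.standard_normal(64))
            for name in ["chair", "table", "lamp"]}

def retrieve(query_vec, db):
    """Return the 3D model whose embedding is most similar (cosine) to the query."""
    q = normalize(query_vec)
    return max(db, key=lambda name: q @ db[name])

# An image embedding that lands near the "chair" descriptor retrieves the chair model.
query = shape_db["chair"] + 0.05 * rng.standard_normal(64)
print(retrieve(query, shape_db))
```

In an AR pipeline along these lines, the retrieved 3D model (plus the predicted class label) is what lets the application anchor virtual content to the physical object.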
Flora is a co-founder of the AR startup Selerio and a recent Ph.D. graduate in Computer Vision from the University of Cambridge. Her publications in several top-tier venues cover topics at the intersection of graphics, vision, and NLP, such as sketch-based modeling and the joint analysis of images and text for 3D retrieval. She also holds the 2014 Google European Doctoral Fellowship in Computer Graphics for her work on retrieving 3D models using images and sketches. Selerio, a Cambridge spin-out, builds on this work to provide developers with live 3D reconstruction and editing of real scenes for more engaging AR experiences. Selerio is backed by investors such as Entrepreneur First and Betaworks.