Robin Kips

Realistic Cosmetics Virtual Try-On Using GANs

The ability of generative models to synthesize realistic images offers new perspectives for cosmetics virtual try-on applications. We propose a new formulation of the makeup style transfer task, with the objective of learning color-controllable makeup style synthesis. We introduce CA-GAN, a generative model that learns to modify the color of specific objects (e.g. lips or eyes) in an image to an arbitrary target color while preserving the background. Since color labels are rare and costly to acquire, our method leverages weakly supervised learning for conditional GANs. This enables us to realistically simulate and transfer a variety of makeup styles.
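To make the idea of color-conditioned generation concrete, here is a minimal sketch, not the authors' implementation: a generator that receives the input image together with the target RGB color broadcast to a spatial map, and a mask-based loss that keeps the background unchanged. All names (ColorConditionedGenerator, background_preservation_loss, the layer sizes) are illustrative assumptions.

```python
# Minimal sketch (not the CA-GAN code) of a color-conditioned generator:
# the target RGB color is broadcast to a spatial map, concatenated with the
# input image, and a mask-based loss keeps the background untouched.
import torch
import torch.nn as nn

class ColorConditionedGenerator(nn.Module):
    def __init__(self, base_channels: int = 32):
        super().__init__()
        # 3 image channels + 3 broadcast color channels = 6 input channels
        self.encoder = nn.Sequential(
            nn.Conv2d(6, base_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(base_channels, base_channels * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(base_channels * 2, base_channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(base_channels, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, image: torch.Tensor, target_color: torch.Tensor) -> torch.Tensor:
        # Broadcast the (B, 3) target color to a (B, 3, H, W) map.
        b, _, h, w = image.shape
        color_map = target_color.view(b, 3, 1, 1).expand(b, 3, h, w)
        x = torch.cat([image, color_map], dim=1)
        return self.decoder(self.encoder(x))

def background_preservation_loss(generated, original, object_mask):
    """Penalize changes outside the made-up region (e.g. outside the lips)."""
    background = 1.0 - object_mask
    return ((generated - original) * background).abs().mean()

if __name__ == "__main__":
    g = ColorConditionedGenerator()
    img = torch.rand(2, 3, 64, 64) * 2 - 1           # fake batch in [-1, 1]
    color = torch.rand(2, 3)                         # arbitrary target RGB
    mask = (torch.rand(2, 1, 64, 64) > 0.5).float()  # placeholder lips mask
    out = g(img, color)
    print(out.shape, background_preservation_loss(out, img, mask).item())
```

In practice such a generator would be trained adversarially alongside the usual GAN losses; the sketch only shows how a continuous color condition can be injected and how background preservation can be expressed.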

Robin Kips is a Research Scientist in the Artificial Intelligence department of L'Oréal Research and Innovation in France. His research focuses on GANs, neural rendering, and color-based computer vision problems. Robin is currently pursuing a Ph.D. at Télécom Paris, working on bringing new perspectives to virtual try-on technologies using generative models.

Key Takeaways:

o Realistic and controllable generative models can be trained without labelled data using weak supervision (see the sketch after this list).

o Generative models can implicitly learn to handle complex phenomena, such as specularities, in a realistic way.

o Controllable generative models are good candidates for the future of AR applications.
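As a rough illustration of the weak-supervision takeaway above, one way to avoid human-annotated color labels is to estimate the achieved color directly from the generated image inside the object region and push it toward the requested target color. This is an assumed formulation for illustration only, not necessarily the talk's exact objective.

```python
# Illustrative sketch (an assumption, not the talk's exact formulation) of a
# weakly supervised color objective: the achieved color is measured from the
# generated image inside the object mask, so no manual color labels are needed.
import torch

def estimated_object_color(image: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Masked mean color of the object region, shape (B, 3)."""
    weighted = (image * mask).sum(dim=(2, 3))
    area = mask.sum(dim=(2, 3)).clamp_min(1e-6)
    return weighted / area

def color_consistency_loss(generated, mask, target_color):
    """L1 distance between the requested color and the color actually rendered."""
    return (estimated_object_color(generated, mask) - target_color).abs().mean()

if __name__ == "__main__":
    gen = torch.rand(2, 3, 64, 64)
    mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
    target = torch.rand(2, 3)
    print(color_consistency_loss(gen, mask, target).item())
```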
