Qiang Huang


Synthesis of Images by Two-Stage Generative Adversarial Networks

We propose a divide-and-conquer approach using two generative adversarial networks (GANs) to explore how a machine can draw colour pictures of birds from a small amount of training data. Our work simulates the procedure of an artist drawing a picture: one begins by sketching the object's contours and edges, and then paints in the different colours. We adopt two GAN models to process the basic visual features of shape, texture and colour. The first GAN model generates the object's shape as a black-and-white image, which the second GAN model then paints using the colour knowledge it has learned. We ran our experiments on 600 colour images. The experimental results show that our approach can generate good-quality synthetic images that are comparable to real ones.
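To make the two-stage idea concrete, here is a minimal sketch, not the authors' exact models: a first generator maps noise to a one-channel shape image, and a second, image-to-image generator colourises it. All layer sizes, class names and the 64x64 resolution are illustrative assumptions; the discriminators and adversarial training loops for each stage are omitted.

```python
# Illustrative two-stage GAN pipeline (assumed architecture, not the paper's exact models).
import torch
import torch.nn as nn

class ShapeGenerator(nn.Module):
    """Stage 1: map a noise vector to a 1-channel (black-and-white) 64x64 shape image."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),    # 8x8
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),       # 32x32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Tanh(),                                # 64x64
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

class ColourGenerator(nn.Module):
    """Stage 2: translate the 1-channel shape image into a 3-channel colour image
    (an image-to-image generator in the spirit of pix2pix)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),                           # 32x32
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2, True),    # 16x16
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),      # 32x32
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),                                # 64x64
        )

    def forward(self, shape_img):
        return self.net(shape_img)

if __name__ == "__main__":
    g1, g2 = ShapeGenerator(), ColourGenerator()
    z = torch.randn(8, 100)      # batch of noise vectors
    shapes = g1(z)               # stage 1: 8 x 1 x 64 x 64 shape images
    colours = g2(shapes)         # stage 2: 8 x 3 x 64 x 64 colour images
    print(shapes.shape, colours.shape)
```

In practice each generator would be trained adversarially against its own discriminator, so the colourisation stage only ever has to learn texture and colour on top of shapes the first stage already produces.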

Dr. Qiang Huang is a senior researcher at the Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey. Over the last twelve years he has worked in several fields, including speech recognition, speech understanding, natural language processing, information retrieval, and audio/visual processing for sports video analysis, and has developed systems for intelligent call routing, interactive information retrieval, tennis game analysis, and a user-based dialogue system using audio, visual and text information. His research now focuses on multimodal information processing using deep neural networks.
