Conditional Convolutional Neural Network for Modality-aware Face Recognition
We propose a conditional Convolutional Neural Network, named c-CNN, to handle multimodal face recognition. Unlike a traditional CNN, which applies fixed convolution kernels to every input, samples in a c-CNN are processed with dynamically activated sets of kernels. The activation of convolution kernels in a given layer is conditioned on the current intermediate representation and on the activation status of the lower layers. The kernels activated across layers define sample-specific adaptive routes that reveal the distribution of the underlying modalities. The proposed method is implemented by incorporating a binary decision tree and is evaluated on multimodal face recognition problems.
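The routing idea in the abstract can be illustrated with a minimal sketch: each layer holds two candidate kernels (the two children of a binary decision tree node), and a per-sample decision on the current feature map picks which kernel to apply. The routing rule used here (the sign of the mean activation) and all function names are illustrative assumptions, not the paper's actual criterion.

```python
import numpy as np

def conv2d(x, k):
    """Valid cross-correlation of a single-channel map x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def conditional_forward(x, kernel_tree):
    """Route a sample through the layers, choosing one of two kernels per
    layer based on the current feature map (hypothetical routing rule:
    the sign of the mean activation). Returns the final feature map and
    the left/right route taken, which identifies the modality-specific path."""
    route = []
    feat = x
    for left_kernel, right_kernel in kernel_tree:
        branch = 0 if feat.mean() >= 0 else 1  # per-sample kernel activation
        route.append(branch)
        kernel = (left_kernel, right_kernel)[branch]
        feat = np.maximum(conv2d(feat, kernel), 0)  # convolution + ReLU
    return feat, route
```

Two samples whose intermediate statistics differ can thus take different routes through the same network, which is the mechanism the abstract uses to separate modalities.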
T-K Kim is an Associate Professor and the leader of the Computer Vision and Learning Lab at Imperial College London, UK. He obtained his PhD from the University of Cambridge in 2008 and held a Junior Research Fellowship there from 2007 to 2010. His research interests primarily lie in tree-structured classifiers for articulated hand pose estimation, face recognition from image sets and videos, and 6D object pose estimation. He has co-authored over 40 papers in top-tier conferences and journals, and his co-authored algorithm for face image retrieval is an MPEG-7 ISO/IEC standard. He is a co-recipient of the KUKA best paper award at ICRA 2014, and a general co-chair of the CVPR 2015/2016 workshops on HANDS and the ICCV 2015 workshop on Object Pose.