The Fovea as an Emergent Property of Visual Attention
Neural attention has been applied successfully to a variety of applications, including natural language processing, vision, and memory. An attractive aspect of these neural models is their ability to extract relevant features from data with minimal feature engineering. We extend this ability further to learning interpretable structural features of the attention window itself. We describe a learnable retinal sampling lattice similar to the retinal ganglion cells present in the primate retina. We explore the emergent properties of this lattice after training and find connections to features found in the physiology. Furthermore, we find conditions under which these emergent properties are amplified or eliminated, providing clues to their function.
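To make the idea of a learnable retinal sampling lattice concrete, the following is a minimal sketch in NumPy: each "ganglion cell" is modeled as a Gaussian kernel with a center and width that would be adjusted by gradient descent during training. Only the forward sampling step is shown; the function and parameter names (`sample_lattice`, `mu`, `sigma`) are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def sample_lattice(image, mu, sigma):
    """Sample a 2D image with N Gaussian kernels.

    image: (H, W) array
    mu:    (N, 2) kernel centers in pixel coordinates (row, col)
    sigma: (N,) kernel widths in pixels
    returns: (N,) vector of kernel responses
    """
    H, W = image.shape
    rows = np.arange(H)[:, None]   # (H, 1) row coordinates
    cols = np.arange(W)[None, :]   # (1, W) column coordinates
    out = np.empty(len(mu))
    for i, ((r0, c0), s) in enumerate(zip(mu, sigma)):
        # Isotropic Gaussian weighting centered at (r0, c0),
        # normalized so each response is a weighted average of pixels.
        k = np.exp(-((rows - r0) ** 2 + (cols - c0) ** 2) / (2 * s ** 2))
        k /= k.sum()
        out[i] = (k * image).sum()
    return out

# A uniform 3x3 lattice over a 32x32 image. During training, the centers
# and widths are free parameters, so a high-resolution "fovea" can emerge
# rather than being built in by hand.
img = np.random.rand(32, 32)
centers = np.stack(np.meshgrid([8, 16, 24], [8, 16, 24]), -1).reshape(-1, 2).astype(float)
widths = np.full(9, 2.0)
features = sample_lattice(img, centers, widths)
print(features.shape)  # (9,)
```

Because the kernel centers and widths enter the output differentiably, they can be trained end to end alongside the rest of an attention model, which is what allows the lattice structure itself to be an emergent property of the task.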
Brian Cheung is a PhD student at UC Berkeley working with Professor Bruno Olshausen at the Redwood Center for Theoretical Neuroscience. His research interests lie at the intersection of machine learning and neuroscience. Drawing inspiration from both fields, he hopes to create systems that can solve complex vision tasks using attention and memory.