About the Ponce Laboratory

How do brain areas interact to create the perception of vision?

The goal of our lab is to define how neurons from different cortical areas interact to realize our perception of shape and motion. We study this question in the brain of the rhesus macaque, recording action potentials from neurons spanning the visual cortical hierarchy: V1, V2, V4, MT and inferotemporal cortex (IT). We believe that the best explanation of visual processing is mathematical, so we work to ensure that all of our results can be implemented in computational models like deep neural networks.

To achieve this goal, we need animals to perform behavioral tasks, so we use modern techniques (including computer-based automated systems) to train the animals humanely and efficiently. We record from their brains using chronically implanted microelectrode arrays, which yield large amounts of data quickly, and sometimes using single electrodes for novel exploratory projects (i.e., our moonshot division!). While recording, we can also use activity-manipulation techniques (such as cortical cooling, optogenetics and chemogenetics) to perturb the cortical inputs to the neurons under study, establishing results that are causal, not just correlational.

Our experimental work is influenced by machine learning. We use a variety of deep neural network architectures (including convolutional, recurrent and generative adversarial networks) to test preliminary hypotheses, interpret results and generate interesting stimuli for biology-based experiments. Our programming languages of choice are Matlab and Python.
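To give a flavor of what "generating stimuli" with a closed loop can mean, here is a deliberately simplified toy sketch in Python, not the lab's actual code: an evolutionary search that optimizes an image to maximize the response of a simulated "neuron" (here just a fixed linear filter). The names `neuron_response` and `evolve_stimulus` are hypothetical; in a real experiment the fitness signal would come from firing rates recorded on the implanted arrays, and candidate images would be produced by a generative network rather than by raw pixel mutation.

```python
import numpy as np

rng = np.random.default_rng(0)

SIZE = 16  # toy image resolution (SIZE x SIZE pixels)
# The simulated neuron's "preferred" pattern (a stand-in for a real cell's tuning).
preferred = rng.standard_normal((SIZE, SIZE))

def neuron_response(img):
    """Simulated firing rate: similarity of the image to the preferred pattern."""
    return float(np.sum(img * preferred))

def evolve_stimulus(generations=50, pop_size=20, noise=0.1):
    """Hill-climbing evolutionary loop: keep the best image, mutate it, repeat."""
    pop = rng.standard_normal((pop_size, SIZE, SIZE))
    for _ in range(generations):
        scores = np.array([neuron_response(img) for img in pop])
        best = pop[np.argmax(scores)]
        # Next generation: noisy variants of the current best image.
        pop = best + noise * rng.standard_normal((pop_size, SIZE, SIZE))
        pop[0] = best  # elitism: never discard the current best
    return best

best_img = evolve_stimulus()
print(neuron_response(best_img))  # response grows far above a random image's
```

The design choice worth noting is the elitism step: because the best image is always carried forward, the recorded response can only improve or stay flat across generations, which is what makes this kind of closed-loop search practical within a recording session.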

Solving the problem of visual recognition at the intersection of visual neuroscience and machine learning will yield applications that will improve automated visual recognition in fields like medical imaging, security and self-driving vehicles. But just as importantly, it will illuminate how our inner experience of the visual world comes to be.