I am interested in the relationship between cortical anatomy, neural encoding, and visual perception: how the brain translates visual signals into an internal representation of visual space, which forms the basis of how we perceive and interact with our visual environment. My work blends neuroimaging (fMRI), psychophysics, and computational modeling. Broadly, it addresses the following questions:
- How are retinotopic maps, particularly the map in primary visual cortex (V1), organised?
- How are fundamental visual dimensions (e.g., contrast, spatial frequency, and temporal frequency) encoded within these retinotopic maps? Does this encoding vary with visual field location, and how does it shape visual perception?
- Why does the size of V1 vary so much across individuals? What does this variability mean for individual differences in visual perception?
Please feel free to email me if you would like advice on collecting or analysing retinotopy data.
Google Scholar