research

about my current research directions and working with me

The overarching aim of my research programme is to understand the computational principles of neural information processing, focussing on how brains and artificial neural networks understand language and make sense of the world. Currently, I am preoccupied with the following three research directions.

Studying language understanding in and with Large Language Models (LLMs)

Large language models (LLMs) now broadly approach human-level performance on many language tasks, such as summarisation and question answering. Moreover, LLMs are also the best available encoding models for predicting brain responses to linguistic stimuli. A major aim of my current work is to understand the reasons behind their impressive performance in both domains. Do LLMs “understand” language – and if so, is their understanding in any way like that of humans? Can we dissect their representations into human-interpretable components, like syntax and meaning, or world and agent models? And can we use such dissections to better understand what is driving the alignment between LLM representations and brain responses to language?
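
To give a flavour of what an LLM-based encoding analysis involves, here is a minimal, purely illustrative sketch: extract hidden states from a pretrained model (GPT-2 here, chosen only for convenience) and fit a ridge regression mapping those features to brain responses. The brain responses below are simulated placeholders, and the layer choice and pooling are assumptions for the example, not my actual pipeline.

```python
# Illustrative encoding-model sketch: LLM hidden states -> (simulated) brain responses.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import RidgeCV

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

sentences = [
    "The cat sat on the mat.",
    "She poured the coffee slowly.",
    "A storm rolled in over the hills.",
    "He forgot where he parked the car.",
]

def sentence_embedding(sentence, layer=6):
    """Mean-pool the hidden states of one layer as a sentence-level feature vector."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer]   # (1, n_tokens, dim)
    return hidden.mean(dim=1).squeeze(0).numpy()

X = np.stack([sentence_embedding(s) for s in sentences])   # LLM features per sentence
y = np.random.randn(len(sentences), 100)                   # placeholder "voxel" responses

# Ridge regression from features to responses; real analyses score predictions
# on held-out stimuli (e.g. prediction-response correlation per voxel).
encoder = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X, y)
print(encoder.score(X, y))
```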

Generative AI for strong and precise tests of the predictive brain hypothesis

A prominent idea in cognitive science is that the brain is a ‘prediction machine’, constantly comparing incoming signals to internal predictions. Advances in AI are allowing for new and better ways to test this hypothesis: using generative AI models to approximate the predictions the brain might be making, and comparing these to brain responses. Previously, I used this approach – which combines deep generative modelling with neural data science – to study prediction in language and music processing. Currently, I am expanding the framework to ask further questions. For instance, do the predictions we find in language generalise to other domains, like vision? At what level of abstraction does the brain predict? And what information is driving these predictions?
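
As a concrete, simplified illustration of this approach: one common test is to quantify how unexpected each word is under a generative language model (its surprisal) and relate that to the neural signal. The sketch below does exactly this with GPT-2 and a simulated neural time series; the model, text, and correlation-based test are stand-ins for illustration, not the full analysis.

```python
# Illustrative predictive-processing test: model surprisal vs. (simulated) neural signal.
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The children walked to school in the pouring rain"
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits                       # (1, n_tokens, vocab)

# Surprisal of each token given its left context: -log p(token | context).
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
surprisal = -log_probs.gather(2, ids[:, 1:, None]).squeeze()   # (n_tokens - 1,)

# Placeholder per-word 'neural' responses; in practice these would be epochs of
# recorded MEG/EEG/fMRI data aligned to word onsets (or a full regression model).
neural = np.random.randn(surprisal.shape[0])

# Simple question: does word-by-word surprisal track the neural response amplitude?
r = np.corrcoef(surprisal.numpy(), neural)[0, 1]
print(f"surprisal-brain correlation: {r:.2f}")
```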

Computational function of recurrence and feedback in vision

Visual processing consists of two stages: a ‘feedforward’ stage in which information is projected from early to later brain areas, followed by a ‘recurrent’ stage in which information cycles locally and is fed back from later to early areas. While much progress has been made on the feedforward stage, recurrence and feedback remain poorly understood. I try to leverage recent advances in AI and neuroscience to make progress on this question. I aim, first, to develop new empirical tests, linking hypothesised functions of recurrence to empirical markers of recurrence in high-quality datasets of visual cortex; and, second, to better understand the computational mechanisms of recurrence, via in-silico analyses of DNNs equipped with different types of recurrence.
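
To make the in-silico part concrete, here is a minimal sketch of the kind of model such analyses start from: a convolutional block with lateral recurrence, unrolled over a fixed number of time steps so that its representation at each step can be compared against feedforward-only controls or neural data. The architecture and parameters below are assumptions chosen for illustration, not a specific model from my work.

```python
# Illustrative recurrent convolutional block, unrolled over discrete time steps.
import torch
import torch.nn as nn

class RecurrentConvBlock(nn.Module):
    """Feedforward conv plus lateral recurrent conv, unrolled for n_steps."""

    def __init__(self, in_channels, out_channels, n_steps=4):
        super().__init__()
        self.feedforward = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.lateral = nn.Conv2d(out_channels, out_channels, 3, padding=1)
        self.n_steps = n_steps

    def forward(self, x):
        drive = self.feedforward(x)          # bottom-up drive, constant across time
        state = torch.relu(drive)
        states = [state]
        for _ in range(self.n_steps - 1):
            # Each time step combines the feedforward drive with lateral recurrent input.
            state = torch.relu(drive + self.lateral(state))
            states.append(state)
        return states                        # one representation per time step

block = RecurrentConvBlock(3, 16, n_steps=4)
images = torch.randn(8, 3, 64, 64)           # a batch of toy images
per_step = block(images)
print([s.shape for s in per_step])
```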


Working with me. If you are a student interested in working on anything related to the above, do get in touch. Note that it is not necessary to have a formal background in computational modelling or AI. It is, however, important to have experience with, and an affinity for, programming in Python. If you do not yet have such experience, don’t worry – there are amazing online resources that allow you to learn these skills independently. I have heard particularly good things about DataCamp and Udemy.