I am an AI research scientist at FAIR working on code generation and reasoning in LLMs.
At some point, I’ll probably get a PhD from the University of Oxford, where I have been working on data efficiency and uncertainty in language and vision models. My supervisors are Yarin Gal in OATML and Tom Rainforth in RainML@OxCSML.
I’ve worked on a bunch of different things, including detecting hallucinations in LLMs, better understanding in-context learning, predicting hallucinations from LLM hidden states, contrastive vision-language models, active model evaluation (twice), non-parametric transformers, multimodal active feature acquisition, and object-structured world models.
Our research on detecting hallucinations in LLMs was published in Nature and discussed in Time Magazine, The Economist, Science, The Independent, and The Washington Post.