Laura Ruis
-
Laura is a postdoc at MIT working with Jacob Andreas. She completed her PhD at UCL DARK with Tim Rocktäschel and Ed Grefenstette. Her research focuses on how language models acquire and express reasoning abilities, with an emphasis on understanding what models learn from data and what capabilities can emerge from simple self-supervised objectives.
-
Emergence of reasoning in LMs. How do models acquire reasoning capabilities from language data and objectives like next-token prediction? What roles do different modalities (e.g. code) play? Why do some benchmarks improve sharply with scale?
LM agency and goal-directed behaviour. To what extent can we attribute goals or planning to language models? How can we detect, measure, or falsify these claims?
Generalisation and out-of-context reasoning. How do models make connections between prompts and seemingly unrelated training examples? Can we predict or detect out-of-context reasoning as reliably as in-context reasoning?
-
You may be a good fit if you:
Are comfortable with experimental design and careful baseline tuning
Have a habit of questioning assumptions and validating claims with experiments
Communicate research progress clearly, both in writing and in discussion
Bonus skills:
Hands-on experience with LLMs (training, fine-tuning, evaluation, or research)
Familiarity with interpretability techniques
Experience working with large-scale compute or ML infrastructure
Postdoctoral Researcher, MIT