Peter Slattery & Alexander Saeri

  • Peter Slattery is a Researcher at MIT FutureTech, where he leads research exploring i) the risks from artificial intelligence, ii) how important those risks are, and iii) how organizations are responding to them. He leads the AI Risk Repository project. He has experience with a broad range of qualitative and quantitative research techniques, including literature reviews, conceptual papers, interviews, experiments, surveys, and structural equation modelling. He received his PhD in Information Systems from the University of New South Wales in Australia.

  • Alexander Saeri uses a mix of applied behaviour science and social science methods to understand and address complex challenges, including the governance of artificial intelligence.

    Alexander has expertise in implementation science, scale-up of effective interventions, group processes, systems thinking, and socio-technical transitions, and has extensive experience as a research consultant and facilitator. He holds a PhD in Social Psychology from the University of Queensland in Australia.

    • Systematic review of AI risk mitigations.
      The fellow may help conduct a systematic review of the existing research on AI risk mitigations, contributing to an arXiv report planned for early 2026. This work will expand and improve our current mitigation taxonomy and build a shared language for mitigations to risks from AI. Possible tasks include: screening and selecting relevant papers; extracting mitigations into a structured database; classifying mitigations with our taxonomy and iteratively improving it; integrating feedback from a large expert author team (>50 authors); helping write the paper; and potentially leading or assisting with LLM-based automation of extraction and classification (see the first sketch after this list). The intended outputs include a public taxonomy of AI risk mitigations, a database of mitigations, and a related webpage and visualizations.

    • Systematic document review of organizational responses to AI risks.
      The fellow may help review public documents from more than 200 organizations (including companies, governments, and other AI actors) to catalogue how they manage AI risks and where gaps remain, feeding into a paper and report planned for early 2026. Possible tasks include: searching for, screening, and coding relevant documents under supervision; helping design and use AI and other tooling to speed up search, extraction, and coding while maintaining rigour (see the second sketch after this list); and contributing to blogs, visualizations, and other external communications about the work.
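
    A minimal sketch of what the LLM-assisted classification step in the first project could look like, assuming Python and the OpenAI client library; the category names, model choice, and prompt are illustrative placeholders, not the project's actual taxonomy or pipeline:

    # Hypothetical sketch: assign one taxonomy category to an extracted
    # mitigation using an LLM. Categories below are illustrative only.
    from openai import OpenAI

    CATEGORIES = [
        "Governance & oversight",
        "Technical safety",
        "Transparency & reporting",
        "Security",
    ]

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def classify_mitigation(text: str) -> str:
        """Ask the model to pick exactly one category for a mitigation."""
        prompt = (
            "Classify this AI risk mitigation into exactly one of these "
            f"categories: {', '.join(CATEGORIES)}.\n\n"
            f"Mitigation: {text}\n\nAnswer with the category name only."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[{"role": "user", "content": prompt}],
            temperature=0,  # stable outputs aid reproducible coding
        )
        return response.choices[0].message.content.strip()

    print(classify_mitigation("Require pre-deployment red-teaming of frontier models."))

    In practice, model-assigned categories would be spot-checked against human coding before being trusted at scale.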
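
    For the document review project, a first-pass screen might rank documents by keyword hits so that human coders review the most promising material first. A minimal sketch using only the Python standard library; the search terms, file layout, and threshold are illustrative assumptions, not the project's codebook:

    # Hypothetical sketch: rank organizational documents by how many
    # risk-related terms they contain. Terms and threshold are placeholders.
    import re
    from pathlib import Path

    RISK_TERMS = [
        r"risk management", r"responsible AI", r"red[- ]team",
        r"model evaluation", r"incident report", r"safety policy",
    ]
    PATTERN = re.compile("|".join(RISK_TERMS), re.IGNORECASE)

    def screen(doc_dir: str, min_hits: int = 3) -> list[tuple[str, int]]:
        """Return (filename, match count) pairs, most matches first."""
        results = []
        for path in Path(doc_dir).glob("*.txt"):
            count = len(PATTERN.findall(path.read_text(errors="ignore")))
            if count >= min_hits:
                results.append((path.name, count))
        return sorted(results, key=lambda pair: pair[1], reverse=True)

    # Screening only orders the review queue; every included document
    # would still be coded by a human, which is how rigour is maintained.
    for name, count in screen("org_documents"):
        print(f"{name}: {count} matches")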

    For more on both projects, see https://airisk.mit.edu/

  • Ideal candidates would have:

    • Strong experience in literature review and qualitative synthesis.

    • Familiarity with AI governance or adjacent policy/technical fields.

    • A PhD or equivalent experience (preferred, but not essential).

Lead, MIT AI Risk Repository; Director, MIT AI Risk Initiative