MIT FutureTech – AI Risk Initiative
Simon Mylius, Alexander Saeri, Peter Slattery
-
The MIT AI Risk Initiative aims to increase awareness and adoption of best-practice AI risk management across the AI ecosystem.
-
Simon Mylius is a Chartered Engineer with over a decade of experience leading Systems Engineering teams in product development and system integration.
He is now focused on AI Safety, applying Systems Engineering methodology to the Technical Governance of Artificial Intelligence. He recently completed the 2025 Winter Fellowship at the Centre for the Governance of AI, where he worked on applying Systems-Theoretic Process Analysis (STPA) to frontier AI. He developed and leads the AI Incident Tracker, a classification tool and dashboard that adds structure to datasets of reported AI safety incidents; it is now hosted by the MIT AI Risk Repository, with its analysis features available on the AI Incident Database. He co-authored ‘Assessing confidence in frontier AI safety cases’, which explores probabilistic assessment methods and approaches to addressing argument ‘defeaters’.
Alexander Saeri uses a mix of applied behaviour science and social science methods to understand and address complex challenges, including the governance of artificial intelligence. Alexander has expertise in implementation science, scale-up of effective interventions, group processes, systems thinking, and socio-technical transitions, and has extensive experience as a research consultant and facilitator. He holds a PhD in Social Psychology from the University of Queensland in Australia.
Peter Slattery is a Researcher at MIT FutureTech, where he leads research to explore i) the risks from artificial intelligence, ii) their importance, and iii) how organizations are responding to these risks. He leads the AI Risk Repository project. He is experienced with a broad range of qualitative and quantitative research techniques, including literature reviews, conceptual papers, interviews, experiments, surveys, and structural equation modelling. He received his PhD in Information Systems from the University of New South Wales in Australia.
-
Systematic review of AI risk mitigations.
The fellow may help conduct a systematic review of the existing research on AI risk mitigations, contributing to an ArXiv report planned for early 2026. This work will expand and improve our current mitigation taxonomy and build a shared language for mitigations to risks from AI. Possible tasks include: screening and selecting relevant papers; extracting mitigations into a structured database; classifying mitigations using and iteratively improving our taxonomy; integrating feedback from a large expert author team (>50 authors); helping write the paper; and potentially leading or assisting with LLM-based automation of extraction and classification. The intended outputs include a public taxonomy of AI risk mitigations, a database of mitigations, and a related webpage and visualizations.
Systematic document review of organizational responses to AI risks.
The fellow may help review public documents from more than 200 organizations (including companies, governments, and other AI actors) to catalogue how they manage AI risks and where gaps remain, feeding into a paper and report planned for early 2026. Possible tasks include: searching for, screening, and coding relevant documents under supervision; helping design and use AI and other tooling to speed up search, extraction, and coding while maintaining rigor; and contributing to blogs, visualizations, and other external communications about the work.
For more information, see: https://airisk.mit.edu/
-
Ideal candidates will have:
Strong experience in literature review and qualitative synthesis.
Familiarity with AI governance or adjacent policy/technical fields.