MIT FutureTech – AI Risk Initiative

Simon Mylius, Alexander Saeri, Peter Slattery

  • The MIT AI Risk Initiative aims to increase awareness and adoption of best-practice AI risk management across the AI ecosystem.

  • Simon Mylius is a Chartered Engineer with over a decade of experience leading Systems Engineering teams in product development and system integration.

    He is now focused on AI Safety, applying Systems Engineering methodology to the Technical Governance of Artificial Intelligence. He recently completed the 2025 Winter Fellowship at the Centre for the Governance of AI, working on the application of Systems Theoretic Process Analysis (STPA) to frontier AI. He developed and leads the AI Incident Tracker, a classification tool and dashboard that adds structure to datasets of reported AI safety incidents; it is now hosted by the MIT AI Risk Repository, with analysis features on the AI Incident Database. He co-authored ‘Assessing confidence in frontier AI safety cases’, exploring probabilistic assessment methods and approaches to addressing argument ‘defeaters’.

    Alexander Saeri uses a mix of applied behaviour science and social science methods to understand and address complex challenges, including the governance of artificial intelligence. Alexander has expertise in implementation science, scale-up of effective interventions, group processes, systems thinking, and socio-technical transitions, and has extensive experience as a research consultant and facilitator. He holds a PhD in Social Psychology from the University of Queensland in Australia.

    Peter Slattery is a Researcher at MIT FutureTech, where he leads research exploring (i) the risks from artificial intelligence, (ii) their importance, and (iii) how organizations are responding to these risks. He leads the AI Risk Repository project. He is experienced with a broad range of qualitative and quantitative research techniques, including literature reviews, conceptual papers, interviews, experiments, surveys, and structural equation modelling. He received his PhD in Information Systems from the University of New South Wales in Australia.

  • Revise the AI Risk Repository Taxonomy

    We are looking for a research fellow to help revise the AI Risk Repository taxonomy so it is more accessible, useful, and up to date. The role involves analyzing qualitative feedback from experts, comparing our current taxonomy with other leading public taxonomies, interviewing team members about how they use it, and identifying any data structure changes needed to support revisions. The fellow would develop recommendations, create mock-ups of revised taxonomies, and help communicate proposed changes through reports or blog posts. This would suit someone with experience in taxonomies, ontologies, or structured data, alongside broad knowledge of AI risks, strong qualitative research skills, and the ability to translate complex ideas into clear written and visual outputs.

    User Research on AI Incident Databases

    We are looking for a research fellow to lead a user research project on the AI incident database ecosystem, including resources such as the AI Incident Database, the MIT AI Risk Repository Incident Tracker, and the OECD AI Incident Monitor. The role involves mapping the landscape of existing databases, designing and running semi-structured interviews with a diverse set of users, analyzing interview data to identify common needs and pain points, and developing practical recommendations for how incident data should be analyzed, visualized, and presented. The fellow would produce a reusable interview protocol, a research report with prioritized recommendations, and an executive summary for maintainers, funders, and policymakers. This would suit someone with strong experience in user or survey research, qualitative and mixed-methods analysis, and familiarity with AI governance or related policy and technical fields. A PhD or equivalent research experience would be valuable but is not essential.

  • Ideal candidates would have:

    • Strong experience in literature review and qualitative synthesis.

    • Familiarity with AI governance or adjacent policy/technical fields.