Philip Tomei

Research Director, AI Objectives Institute

  • Philip Moreira Tomei is Research Director at the AI Objectives Institute. He supervises three research streams: civilizational resilience and gradual disempowerment, on how AI systems operating within market incentives may progressively erode human agency; AI economics, on labor displacement, economic concentration, and the distribution of productive capacity; and governance with AI/supercoordination, building tools and frameworks for scaling collective decision-making, including Talk to the City, an open-source deliberation platform deployed across four continents. His work involves coordinating research across AI labs, government bodies, and academic institutions.

  • I'm interested in mentoring fellows on research at the intersection of AI safety, political economy, and institutional design. For this fellowship I am focused on default-path scenarios rather than tail risks. Specific ongoing projects include:

    • Constructing a Gradual Disempowerment Index — Developing composite indicators that track erosion of human influence across economic, cultural, and governance domains. This involves identifying proxy metrics from existing public data, defining thresholds for meaningful change, and moving beyond AI exposure indices (which measure capability overlap) to track actual displacement and the erosion of meaningful decision-making authority.

    • Mapping the solution-space for human agency and societal resilience — Who is already working on solutions to civilizational resilience and gradual disempowerment, whether or not they use those terms? This project systematically identifies existing and emerging research, and crucially the gaps, across AI governance, labor economics, democratic theory, machine learning, sociology, and institutional design as it bears on preserving human agency, with the aim of directly informing partners in philanthropy and government about where research investment can have the most impact.

    • Post-AGI institutional design — What governance architectures could preserve meaningful human agency under conditions of radical AI capability? Drawing on polycentric governance (Ostrom), subsidiarity, mechanism design, and democratic theory, this project asks how political, technical, social, and economic institutions would need to be redesigned. Successful work will be submitted to journals and presented at select conferences.
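As a rough illustration of the composite-indicator construction in the index project above, the sketch below normalizes proxy metrics and aggregates them with domain weights. All metric names, values, weights, and the min-max normalization are hypothetical choices for illustration, not the project's actual methodology.

```python
# Illustrative composite-index sketch: scale raw proxy metrics to [0, 1],
# then take a weighted average across domains. Everything here
# (domains, weights, normalization scheme) is a hypothetical example.

def min_max_normalize(values):
    """Scale a list of raw observations to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(domain_scores, weights):
    """Weighted average of per-domain scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(domain_scores[d] * weights[d] for d in weights)

# Hypothetical normalized domain scores for a single year.
scores = {"economic": 0.4, "cultural": 0.25, "governance": 0.6}
weights = {"economic": 0.5, "cultural": 0.2, "governance": 0.3}
print(composite_index(scores, weights))  # a single aggregate in [0, 1]
```

Tracking this aggregate (and its per-domain components) over time, rather than a capability-overlap snapshot, is what distinguishes a displacement index from an exposure index.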

  • Fellows should be comfortable working across disciplinary boundaries — the research sits at the intersection of AI safety, political economy, machine learning, and empirical social science. We're looking for people familiar with at least one of AI governance, economics, institutional theory, cognitive science, complexity science, or ML post-training (particularly RLHF and reward modeling), and able to engage seriously with both quantitative evidence and normative argument. Experience with systematic reviews, landscape analysis, or composite-index methodology is a plus. Most importantly, we want fellows who can think structurally and rigorously about how systems change — not just what AI can do, but what happens to societies when it does.