Aaron Scher
-
Aaron’s research at MIRI is focused on International Coordination on AI, with an emphasis on Verification Mechanisms.
He earned his bachelor’s degree in psychology from Pitzer College in 2022 and quickly transitioned to working in AI safety. After a year of upskilling and helping grow the field, Aaron started doing independent AI alignment research with the MATS program in summer 2023. He later managed four research teams through SPAR (the Supervised Program for Alignment Research) working on sycophancy and interpretability. Aaron joined MIRI's Technical Governance Team in July 2024.
-
Example projects I would be interested in mentoring:
Write a survey of approaches to running LLMs on consumer hardware. Give an overview of the methods used and the performance achieved with tools like LlamaCPP (see the sketch after this list). This slots into section 2.3.3 of this report and is relevant to the feasibility of compute governance under different conditions.
Review historical case studies of efforts to get experts and scientists to avoid certain lines of research. This slots into section 2.3.6 of this report and is relevant to governing personnel and research.
Figure out how AI models could be developed in a hardware-dependent way. For instance, the most extreme case would be a chip fabbed with the model weights directly on it, capable of carrying out only those operations. Other options might involve TPMs, encryption, and more. This slots into section 2.3.3 of this report and is relevant to AI model weight security, non-proliferation, and AI verification.
Build a website serving as a leaderboard for new LLMs on "alignment benchmarks" (by which I mean the closest things we currently have to alignment benchmarks).
Run capability evaluations on Chinese LLMs. Be the person with correct takes about a model's capability level within 24 hours of its release.
This report (https://techgov.intelligence.org/research/ai-governance-to-avoid-extinction) contains around 400 questions, many of which I would be excited to mentor people working on!
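For the consumer-hardware survey idea above, here is a minimal sketch of the kind of measurement such a survey would collect, assuming the llama-cpp-python bindings and a locally downloaded quantized GGUF model. The model path, thread count, and prompt are illustrative placeholders, not a recommended setup; a real survey would sweep quantization levels, context sizes, and hardware configurations.

```python
# Minimal sketch: timing local inference on a quantized GGUF model with the
# llama-cpp-python bindings, as a single data point for a consumer-hardware survey.
import time

from llama_cpp import Llama

# Hypothetical local path to a 4-bit quantized model downloaded beforehand.
MODEL_PATH = "models/llama-3-8b-instruct.Q4_K_M.gguf"

llm = Llama(
    model_path=MODEL_PATH,
    n_ctx=2048,       # context window
    n_threads=8,      # CPU threads; tune to the machine being benchmarked
    n_gpu_layers=0,   # 0 = CPU-only, the typical consumer-hardware baseline
)

prompt = "Explain what quantization does to a neural network in one sentence."

start = time.time()
out = llm(prompt, max_tokens=128, temperature=0.0)
elapsed = time.time() - start

text = out["choices"][0]["text"]
n_tokens = out["usage"]["completion_tokens"]
print(text.strip())
print(f"{n_tokens} tokens in {elapsed:.1f}s ({n_tokens / elapsed:.1f} tok/s)")
```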
-
Interest in pursuing AI governance or policy professionally
Experience reading AI papers (e.g., at least 12 total hours spent reading papers on arXiv, at least 10 papers)
Completion of AI Safety Fundamentals or an equivalent introductory AI safety/alignment course; an AI governance course also counts
Comfort pursuing self-directed research
Researcher, Machine Intelligence Research Institute