Fellowship Information

Dates & Location:
February 2 – April 10, 2025
Harvard Square, Cambridge, MA

About the Fellowship

The Cambridge Boston Alignment Initiative Spring Research Fellowship is an intensive, fully funded, ten-week research program hosted in Cambridge, Massachusetts. It is designed to support talented researchers aiming to advance their careers in AI safety, across both technical and governance work. Fellows work closely with established mentors, participate in engaging workshops and seminars, and gain valuable research experience and networking opportunities within the vibrant AI safety community at Harvard, MIT, and leading AI safety organizations.

Members of our inaugural fellowship cohort have joined Goodfire and Constellation, established their own research group, had work accepted at NeurIPS and ICLR, and shared their reports with policymakers in DC.

We host a speaker series as part of the fellowship program. Speakers for the summer cohort included:

  • Michael Aird — RAND TASP

  • Neel Alex — University of Cambridge

  • David Bau — Northeastern University

  • Joe Carlsmith — Open Phil

  • Joshua Clymer — Redwood Research

  • Raymond Douglas — Telic Research

  • Sara Fish — Harvard

  • Hans Gundlach — MIT FutureTech

  • Jared Leibowich — Samotsvety

  • Trevor Levin — Open Phil

  • Jayson Lynch — MIT FutureTech

  • Samuel Marks — Anthropic

  • Max Nadeau — Open Phil

  • Aruna Sankaranarayanan — MIT

  • Ekdeep Singh Lubana — Goodfire

  • Stewy Slocum — MIT

  • Cristian Trout — Artificial Intelligence Underwriting Company

  • Kevin Wei — UK AISI

Note: If you are an international student in the US, we can accept OPT and CPT; however, we are unable to sponsor visas for this program.

If you refer someone who becomes a fellow, you will receive a $100 Amazon gift card.

What We Offer

Stipend: $8,000 over the ten-week fellowship.

Accommodation: For participants coming from outside the Boston Metropolitan Area, we will arrange housing for the entire program.

Meals: Free weekday lunches and dinners, plus snacks, coffee, and beverages.

Dedicated Workspace: 24/7 office access in Harvard Square, a few minutes from Harvard Yard and the Charles River.

Mentorship: Weekly individual mentorship (1–2 hrs/week) from researchers at renowned institutions such as Harvard, MIT, Google DeepMind, the UK AI Security Institute, the Institute for Progress, the Center for a New American Security, and others.

Professional Development: Dedicated in-house research managers provide high-touch support to strengthen your research skills, clarify your research direction, and advance your career trajectory in AI safety.

Networking and Community: A weekly speaker series featuring renowned researchers in the field, plus events, workshops, and socials with the Harvard and MIT AI safety groups as well as the broader public.

Compute Support: Up to $10,000 per fellow in API credits and on-demand GPUs.

Extension Fellowship: For those interested in continuing their research, we offer an extension program of up to four months. Approximately 50% of the fellows in our inaugural cycle received extension funding for one to four months.

Who Should Apply?

We welcome applications from anyone deeply committed to advancing the safety and responsible governance of artificial intelligence. Ideal candidates include:

  • Undergraduate, Master's, and PhD students, as well as postdocs, looking to explore or deepen their engagement in AI safety research.

  • Early-career professionals or researchers aiming to transition into technical AI safety or AI governance work.

  • Individuals who are passionate about addressing the risks associated with advanced AI systems.

Note: Applicants must be 18 years or older.

We highly recommend reviewing each mentor’s profile before submitting your application.

Application Process

Our application process consists of four steps:

  1. General application form

  2. Mentor-specific question, test task, or code screening (if applicable)

  3. 15-minute interview with CBAI

  4. Interview with the mentor

We will review applications on a rolling basis. Please apply at your earliest convenience.

Our Research Tracks

Technical AI Safety: Research focused on reducing catastrophic risks from advanced AI by developing alignment strategies, interpretability techniques, and robustness measures.

AI Governance: Research on policy frameworks, institutional designs, and strategic interventions aimed at managing and mitigating existential risks posed by powerful AI systems.

Technical Governance: Research at the intersection of technical AI alignment and policy, including compute governance, model evaluations, and institutional mechanisms to ensure advanced AI systems remain safe and controllable.

Fellows will produce impactful research outputs, which could include academic publications, policy briefs, blog posts, technical demonstrations, or presentations.

For more information or questions, please reach out to emre@cbai.ai

Apply by December 14th!