Fellowship Information
Dates & Location:
June 23 – August 17, 2025
Harvard Square, Cambridge, MA
About the Fellowship
The application period is now closed. We sincerely appreciate your interest.
The Cambridge Boston Alignment Initiative Summer Research Fellowship is an intensive, fully funded, eight-week research program hosted in Cambridge, Massachusetts. It is designed to support talented researchers aiming to advance their careers in AI safety, covering both technical and governance domains. Fellows work closely with established mentors, participate in workshops and seminars, and gain valuable research experience and networking opportunities within the vibrant AI safety community at Harvard, MIT, and leading AI safety organizations.
Note: Unfortunately, we are unable to sponsor work visas for this program. However, international students already in the US may participate through OPT (Optional Practical Training).
If you refer a fellow to us, you can receive a $100 Amazon gift card.
What We Offer
Stipend: $8,000 over the two-month fellowship.
Accommodation: We’re working on arranging housing; if direct arrangements aren’t possible, we will offer a generous housing stipend.
Meals: Free weekday lunches and dinners, plus snacks, coffee, and beverages.
Dedicated Workspace: 24/7 office access in Harvard Square, a few minutes from Harvard Yard and the Charles River.
Mentorship: Weekly individual mentorship (1–2 hrs/week) from researchers at renowned institutions such as Harvard, MIT, Northeastern, FAR.AI, Redwood Research, MIRI, the Center for AI Safety, Anthropic, Google DeepMind, and the UK AI Safety Institute.
Professional Development: Dedicated research managers providing strategic support to strengthen your research skills and career trajectory in AI safety and governance.
Networking and Community: Events, workshops, and socials with Harvard and MIT AI safety groups, as well as guest speakers.
Who Should Apply?
We welcome applications from anyone deeply committed to advancing the safety and responsible governance of artificial intelligence. Ideal candidates include:
Undergraduate, Master's, and PhD students looking to explore or deepen their engagement in AI safety research.
Early-career professionals or researchers aiming to transition into AI safety or governance.
Individuals who are passionate about addressing the risks associated with advanced AI systems.
Note: Applicants must be 18 years or older.
We highly recommend reviewing each mentor’s profile before submitting your application.
Application Process
Our application process consists of three steps:
1. Initial review
2. 15-minute interview with CBAI
3. Interview or test task assigned by the mentor
We will review applications on a rolling basis. Please apply at your earliest convenience.
Our Research Tracks
Technical AI Safety: Research focused on reducing catastrophic risks from advanced AI by developing alignment strategies, interpretability techniques, and robustness measures.
AI Governance: Research on policy frameworks, institutional designs, and strategic interventions aimed at managing and mitigating existential risks posed by powerful AI systems.
Technical Governance: Research at the intersection of technical AI alignment and policy, including compute governance, model evaluations, and institutional mechanisms to ensure advanced AI systems remain safe and controllable.
Fellows will produce impactful research outputs, such as academic publications, policy briefs, blog posts, technical demonstrations, or presentations.
For more information or with any questions, please reach out to emre@cbai.ai.