SecureBio
-
SecureBio is a biosecurity nonprofit dedicated to safeguarding biotechnology and preventing catastrophic biological risks. Our AI team develops rigorous benchmarks and evaluation frameworks to assess AI systems' biological capabilities, as well as mitigation strategies to reduce risk once AI capabilities cross defined thresholds. Our tools are used by frontier AI labs, and our work has informed national security briefings and emerging governance standards.
-
More information soon.
-
Potential research directions for fellows working with SecureBio may include:
AI biosecurity benchmark design and evaluation methodology
Agentic AI systems and their dual-use biological capabilities (e.g. multi-step computational design workflows)
Mitigation strategies for reducing harmful biological capability uplift in frontier models (e.g. pretraining data filtering, jailbreak mitigations)
Translating AI capability evaluations into real-world risk estimates
Evaluation of biological AI models (BAIMs) and their integration with LLM agents
AI governance and standards: how technical evaluation outputs inform policy and audit frameworks
-
SecureBio is looking for fellows who bring a combination of technical depth and biosecurity motivation. Ideal candidates will have some of the following:
Background in biology, virology, or a related life science
Experience with AI/ML, including familiarity with LLM evaluation or agentic systems
Interest in biosecurity, AI safety, or dual-use research governance
Self-motivation and the ability to proactively identify and solve problems in a rapidly evolving field
Strong written communication skills for research outputs intended for technical and policy audiences
Fellows who bring their own well-scoped research questions are also welcome.