Sean McGregor
Co-Founder, AVERI; Founder, AI Incident Database
-
Sean McGregor is a machine learning safety researcher and co-founder of AVERI. His efforts have included launching the AI Incident Database, starting the Digital Safety Research Institute at UL Research Institutes, and training edge neural network models for the neural accelerator startup Syntiant.
Sean's open-source development work has earned media attention in Time, The Atlantic, Der Spiegel, and Wired, among others, while his technical publications have appeared in a variety of machine learning, human-computer interaction, ethics, and application-centered proceedings.
Dr. McGregor currently serves as executive director of the AI Incident Database and lead of the MLCommons Agentic Workstream, in addition to his role as co-founder of AVERI. These efforts thematically align with an interest in "AI risk" and how we might understand those risks by building the capacity to insure them.
-
Incident Databasing. While the AI Incident Database is the world's leading database of harms produced by AI systems, its breadth and depth of coverage have not kept pace with the expansion of AI deployments. Dr. McGregor is prepared to mentor on the following topics to scale the breadth and depth of incident coverage.
1. First-Party Reporting. The AI Incident Database accepts first-party reports of events (i.e., reports from people who were involved or harmed), but it has not yet made a deliberate, resourced effort to collect and improve these records.
2. Public Health Declarations. Two Arcadia Impact cohorts have developed a process for making "public health"-like declarations about AI risks in the real world (e.g., whether a harm event is endemic or emerging). The practice has not been scaled beyond the few test incident types developed by the researchers. The next step in this impact area is scaling up the practice of risk declaration by recruiting subject matter experts and building a pipeline for their contributions.
3. Federation Development. Many interest-specific databases exist, or are in development, for collecting AI harm events (i.e., incidents) within particular contexts. This patchwork of projects could be better networked and integrated to enhance the rigor and coverage of incident methodologies.
4. Incident Toolkit. The AI Incident Database was developed largely before the LLM era. Many tools that would provide greater breadth and depth of coverage could be developed and deployed on scalable serverless infrastructure.
Auditing. The AI Verification and Evaluation Research Institute (AVERI) launched to make auditing of frontier model companies effective and universal. One of the many problems making audits less effective and less common is the absence of an "audit go bag" that would enable subject matter experts (e.g., biorisk specialists) to enter a frontier model company's privileged environment and effectively interrogate the safety of its systems and organizational controls.
Other thematically aligned topics that do not exactly fit the areas above will also be considered.
-
Ideal candidates would have:
Strong data science experience; engineering experience is a plus.
Familiarity with AI governance or adjacent policy/technical fields.
A PhD or equivalent experience (preferred but not essential).