Center for Nonlinear Studies
Monday, October 07, 2024
1:00 PM - 2:00 PM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

Tradeoffs between AI Progress and Biorisk

Ezra Karger
Economist, Federal Reserve Bank of Chicago, and Research Director, Forecasting Research Institute

We ask experts and accurate forecasters to predict the effect of AI progress on the likelihood of catastrophic pathogen outbreaks, particularly via AI-enhanced access to bioweapons. A primary objective is to inform capability scaling policies (CSPs) for leading AI labs and policymakers, with a focus on the risk posed by AI-enabled chemical and biological weapons. Despite general concerns, few details exist on the specific capabilities that could raise biosecurity risks, the thresholds for intervention, or necessary countermeasures. To address these gaps, we design an idealized evaluation scenario comparing the risk of pathogen synthesis between two groups—one with and one without access to large language models (LLMs). We ask experts to forecast the risk of a non-natural catastrophic pathogen outbreak (causing more than 100,000 deaths or more than $1 trillion in damage) under two AI progress scenarios: (A) no major AI advancements by 2026, and (B) a 10x increase in the proportion of STEM graduates able to synthesize pathogens, driven by LLMs. Expert elicitation suggests a median increase of more than 50,000 expected deaths (or greater than $500 billion in damages) within three years in scenario B vs. scenario A. These findings highlight the need for detailed, scenario-based evaluations to inform policy design and to identify "red-line capabilities" in AI development that warrant precautionary regulation.
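
To make the elicitation design concrete, here is a minimal sketch of how per-expert forecasts for the two scenarios could be aggregated into the median-difference summary quoted above. This is not the study's actual analysis pipeline; the expert counts and forecast values in `scenario_a` and `scenario_b` are hypothetical and chosen only for illustration.

```python
# Hypothetical sketch of aggregating elicited expert forecasts.
# Each expert gives an expected-death forecast under scenario A (no major AI
# advancements by 2026) and scenario B (10x increase in the proportion of STEM
# graduates able to synthesize pathogens). All numbers below are invented.
import statistics

# Hypothetical per-expert forecasts of expected deaths within three years.
scenario_a = [2_000, 5_000, 1_000, 10_000, 3_000]
scenario_b = [40_000, 90_000, 30_000, 120_000, 60_000]

# Within-expert differences isolate the change attributed to AI progress.
differences = [b - a for a, b in zip(scenario_a, scenario_b)]

# The median of these differences is the kind of summary statistic reported
# in the abstract (a median increase in expected deaths, scenario B vs. A).
median_increase = statistics.median(differences)
print(f"Median increase in expected deaths: {median_increase:,.0f}")
```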

Bio: Ezra Karger is an economist in the microeconomics research group at the Federal Reserve Bank of Chicago and the Research Director of the Forecasting Research Institute, where he develops incentive-compatible methods for forecasting unresolvable questions, explores the limits of forecasting in low-probability domains, and conducts large-scale surveys of experts, often focused on long-run geopolitical outcomes. In his role as an economist, he also uses large datasets to construct high-frequency indices that track policy-relevant economic indicators.

Host: Sara Del Valle (A-1)