Los Alamos and OpenAI collaborate to lay the groundwork for safeguards
July 2, 2025
What could go wrong if a graduate student asks an AI chatbot how to conduct a complex experiment or a malicious actor seeks to create disease threats? The result could be dangerous in either scenario, regardless of the person's intent.
For the past year, Los Alamos National Laboratory scientists have partnered with OpenAI, the company that created ChatGPT, to investigate how bioscience researchers can use AI systems safely.
Why this matters: AI-driven advances in biology could be undermined if online chatbots readily deliver dangerous information. This study is a step toward understanding the human-AI interface, which will help companies and institutions develop AI systems that are useful for advancing biology and medicine while minimizing the risk of harm.
What they did: Rather than assessing what chatbots know, Los Alamos scientists studied how real people use AI to accomplish tasks. In a working biological laboratory, AI guided nonexperts through genetically engineering E. coli bacteria to produce and purify insulin.
- The findings show how AI can uplift the skills of novices to execute complex biological experiments, for both beneficial and potentially malicious ends.
- For the collaboration, OpenAI provided AI models and insights into their functionality, while the Lab contributed technical expertise and experimental design capabilities.
The big picture: OpenAI’s engagement with national labs could transform how AI systems guard against chemical, biological, radiological and nuclear risks.
Funding: Laboratory Directed Research and Development program at Los Alamos
Ethics: This study was approved by the Lab’s Human Subjects Research Review Board. Informed consent was obtained from all participants.
LA-UR-25-26070