Scientist reveals AI chatbot gave him instructions for how to create, deploy genocidal bioweapon

A Stanford biologist was shaken to discover, during a safety test last year, that an AI chatbot had outlined a detailed plan for a mass biological attack on a transit system. AI chatbots have previously been accused of complicity in instances of murder.
Dr. David Relman, a microbiologist who has advised the US government on the threat of biological weapons, was pressure-testing an AI model's safety limits when the model told him how to modify and deploy a pathogen resistant to treatments. The bot also identified security vulnerabilities in a major mass transit system where the pathogen could be deployed, per the New York Times.
The bot further outlined how to maximize casualties and advised the doctor on how best to avoid getting caught. "It was answering questions that I hadn't thought to ask it, with this level of deviousness and cunning that I just found chilling," said Relman. He declined to say which chatbot had produced the plan.
After Relman's test, the company added guardrails to prevent the bot from disclosing such knowledge to the general public. Relman is part of a group of subject-matter experts recruited by AI companies to vet their products for safety risks.
Kevin Esvelt, a genetic engineer at MIT, said that OpenAI's ChatGPT explained how to use a weather balloon to drop biological payloads on a city. Another model, Google's Gemini, was able to rank pathogens by the damage they could cause to the cattle or pork industry. Anthropic's model, Claude, also produced directions for developing a novel toxin from a cancer drug.