News

March 8, 2024

Top scientists sign agreement to prevent misuse of AI bioweapons

An agreement signed by more than 90 scientists asserts that artificial intelligence's benefits to the field of biology outweigh any potential harm, and its signatories advocate an openly collaborative scientific environment in which to explore AI technologies further

Cade Metz reports today for The New York Times on an agreement by more than 90 scientists intended to prevent AI-designed bioweapons from endangering the public. Last year, Dario Amodei, chief executive of the leading AI startup Anthropic, warned Congress that new AI technologies could enable individuals with malicious intent but no specialized skills to orchestrate significant biological threats, such as dispersing viruses or toxins capable of causing widespread harm and death. The warning alarmed senators across the political spectrum and ignited a debate among AI researchers in industry and academia over how severe the risk really is.

In response to these concerns, more than 90 biologists and other scientists who specialize in AI-assisted protein design, a technology essential to biological innovation, have now pledged to advance their research responsibly and to avoid work that could endanger the public. Representatives of laboratories worldwide, among them Nobel laureate Frances Arnold, collectively assert that the advantages of current AI technology for protein design, including the development of new vaccines and medications, significantly outweigh the risks. Their joint statement emphasizes their intention to ensure that their work continues to benefit society at large without leading to adverse outcomes.

The group is not advocating restrictions on AI technology itself but rather stricter control over the tools required to create new genetic material. David Baker, who heads the Institute for Protein Design at the University of Washington and played a key role in forming the agreement, highlighted the need to regulate DNA manufacturing equipment. According to Baker, protein design is only the initial phase; the actual synthesis of DNA, which brings those designs into physical reality, is where regulatory efforts should be concentrated. The initiative is part of a broader effort to balance AI's potential risks and benefits as tech firms, academic institutions, regulators, and legislators scrutinize the technology's capacity to spread misinformation, rapidly displace jobs, and pose existential threats.

At the congressional hearing, Dr. Amodei of Anthropic discussed how advances in large language models (LLMs), the technology powering online chatbots, could eventually facilitate the creation of new biological weapons, though he acknowledged this is not yet possible. Anthropic's research indicated that today's LLMs offer only a marginal advantage over standard internet search engines to someone seeking to develop or acquire a biological weapon. Dr. Amodei warned, however, that further improvements in LLMs could introduce serious risks within a few years.

OpenAI, the company behind the ChatGPT chatbot, has since conducted a study with similar findings, concluding that LLMs currently pose no greater risk than search engines in this context. Aleksander Madry, a professor of computer science at MIT and head of preparedness at OpenAI, stressed that, despite ongoing improvements to these systems, there is no evidence yet that they will enable the creation of new bioweapons. Because LLMs generate outputs from vast amounts of existing online data, their capabilities are limited to reshaping and combining information that is already available, including information on biological warfare.

To accelerate the development of novel medical treatments and vaccines, researchers are building AI systems capable of generating innovative protein designs. Although such advances could in theory assist in developing biological weapons, experts such as Andrew White, co-founder of the nonprofit Future House and one of the agreement's signatories, argue that constructing an actual weapon would require extensive resources and infrastructure, which limits the direct risk posed by AI. The scientists behind the agreement have nonetheless called for security protocols to prevent DNA manufacturing tools from being misused and for pre-release safety and security reviews of new AI models, while continuing to advocate an open, collaborative scientific environment in which to explore and improve these technologies.

Credits

Cade Metz, a writer covering artificial intelligence, driverless cars, robotics, virtual reality, and other emerging areas of technology, initially wrote and reported this story from San Francisco for The New York Times on March 8, 2024, under the title "Dozens of Top Scientists Sign Effort to Prevent A.I. Bioweapons."

Photo: An artist’s illustration of artificial intelligence (AI). This image represents the boundaries set in place to secure safe, accountable biotechnology. It was created by Khyati Trehan. Photo © Google DeepMind.