An artificial intelligence (AI) built to help drug makers discover new medications for treating disease generated 40,000 potential chemical weapons in just six hours.
Researchers at the North Carolina-based startup Collaborations Pharmaceuticals Inc. say they have computational proof that AI technologies designed for drug discovery could be “misused for de novo design of biochemical weapons.”
In a study published in the journal Nature Machine Intelligence, the company describes how a “thought exercise” turned into a “wake-up call” for the “AI in drug discovery community.”
Collaborations Pharmaceuticals Inc. has a commercial machine learning model, called MegaSyn, trained to identify potential drug candidates while filtering out compounds that would be toxic to humans. The scientists wanted to know what would happen if the logic of the AI’s algorithm was reversed: what would it do if it were trained to seek out toxic compounds rather than eliminate them?
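In broad strokes, the inversion the researchers describe can be pictured as flipping the sign of a toxicity penalty inside a multi-objective scoring function. The sketch below is purely illustrative; the function names and weighting are assumptions for the sake of the example, not MegaSyn’s actual code:

```python
# Purely illustrative sketch of "inverting" a toxicity model in a generative
# drug-discovery pipeline. predict_activity and predict_toxicity stand in for
# trained models; neither the names nor the weighting reflect MegaSyn itself.

def score(candidate, predict_activity, predict_toxicity, invert_toxicity=False):
    """Toy multi-objective score: the optimizer favors higher values."""
    activity = predict_activity(candidate)  # hypothetical predicted potency, 0..1
    toxicity = predict_toxicity(candidate)  # hypothetical predicted toxicity, 0..1
    # Normally toxicity is penalized; flipping the sign of its weight is the
    # entire "inversion": the same predictor now rewards toxic candidates.
    toxicity_weight = 1.0 if invert_toxicity else -1.0
    return activity + toxicity_weight * toxicity
```

The point the paper stresses is that nothing about the underlying toxicity predictor has to change; only the way the generative loop consumes its output does.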
Using an open-source database, the scientists instructed their AI to look for molecules with chemical properties similar to those of the nerve agent VX, one of the most dangerous chemical warfare agents developed in the 20th century.
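Searching for “similar molecules” of this kind is a standard similarity screen. As a rough illustration of the technique only, not the researchers’ actual workflow, here is how such a screen is commonly done with the open-source RDKit toolkit, using caffeine as a harmless stand-in for the query structure:

```python
# Illustrative similarity screen with RDKit, a common open-source
# cheminformatics toolkit. Caffeine is a harmless placeholder query; the
# workflow, not the target, is the point: fingerprint each library molecule
# and rank it by Tanimoto similarity to the query.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

query = Chem.MolFromSmiles("CN1C=NC2=C1C(=O)N(C)C(=O)N2C")  # caffeine (placeholder)
library = {
    "aspirin":  "CC(=O)OC1=CC=CC=C1C(=O)O",
    "nicotine": "CN1CCC[C@H]1c1cccnc1",
}

query_fp = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
for name, smiles in library.items():
    fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)
    print(name, round(DataStructs.TanimotoSimilarity(query_fp, fp), 3))
```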
According to reporting from The Blaze:
VX is a tasteless and odorless chemical that attacks the body’s nervous system, paralyzing muscles and preventing a person exposed to the agent from breathing. The extremely toxic compound was used to assassinate Kim Jong-nam, the half-brother of North Korean dictator Kim Jong-un.
Less than six hours after it was started, the AI had not only regenerated VX, but had also designed 40,000 molecules that were either known chemical warfare agents or candidates for new chemical weapons. Some were predicted to be even more toxic than known chemical warfare agents.
“By inverting the use of our machine learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules,” the paper’s authors wrote.
“Our toxicity models were originally created for use in avoiding toxicity, enabling us to better virtually screen molecules (for pharmaceutical and consumer product applications) before ultimately confirming their toxicity through in vitro testing. The inverse, however, has always been true: the better we can predict toxicity, the better we can steer our generative model to design new molecules in a region of chemical space populated by predominantly lethal molecules.”
Possibly the researchers’ most frightening conclusion is how easy their work would be to replicate.
Fabio Urbina, a senior scientist at Collaborations Pharmaceuticals and the paper’s lead author, told The Verge in an interview that anyone with a background in chemistry and internet access could replicate their work.
“If you were to Google generative models, you could find a number of put-together one-liner generative models that people have released for free. And then, if you were to search for toxicity datasets, there’s a large number of open-source tox datasets. So if you just combine those two things, and then you know how to code and build machine learning models — all that requires really is an internet connection and a computer — then, you could easily replicate what we did. And not just for VX, but for pretty much whatever other open-source toxicity datasets exist,” Urbina said.
“Of course, it does require some expertise. If somebody were to put this together without knowing anything about chemistry, they would ultimately probably generate stuff that was not very useful. And there’s still the next step of having to get those molecules synthesized. Finding a potential drug or potential new toxic molecule is one thing; the next step of synthesis — actually creating a new molecule in the real world — would be another barrier.”
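The building blocks Urbina lists, an open dataset plus off-the-shelf machine learning, really are commodity tools. As a generic and deliberately toy illustration, assuming made-up molecules and labels rather than any real toxicity data, fitting a property predictor to labeled structures takes only a few lines with RDKit and scikit-learn:

```python
# Generic property-predictor sketch: featurize molecules as fingerprints and
# fit an off-the-shelf classifier. The molecules and 0/1 labels below are
# invented placeholders, not a real toxicity dataset.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

data = [
    ("CC(=O)OC1=CC=CC=C1C(=O)O", 0),  # aspirin, placeholder label
    ("CCO", 0),                       # ethanol, placeholder label
    ("c1ccccc1", 1),                  # benzene, placeholder label
]

def featurize(smiles):
    """Morgan fingerprint as a plain numpy bit array."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=1024)
    return np.array(list(fp), dtype=np.int8)

X = np.array([featurize(smiles) for smiles, _ in data])
y = np.array([label for _, label in data])

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(model.predict_proba([featurize("CCO")]))  # class probabilities for a query
```

As Urbina notes, the real barriers are not the software but domain expertise and, above all, the step of actually synthesizing a molecule.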
Importantly, not every molecule the AI identifies as a chemical weapons candidate would actually work if it were manufactured. Some will inevitably be false positives, just as AI-identified drug candidates do not always lead to medicines that work.
Still, the technology clearly has dangerous implications, so much so that the scientists hesitated to publish their findings at all, for fear someone could put their work to malicious use.
The dataset the team used could be downloaded for free, and the researchers worry that all it takes is some coding knowledge to turn a benign AI into a chemical weapon-designing machine.

“At the end of the day, we decided that we kind of want to get ahead of this,” Urbina explained. “Because if it’s possible for us to do it, it’s likely that some adversarial agent somewhere is maybe already thinking about it or in the future is going to think about it.”
The paper also recommends several precautions that drug researchers using AI should take to prevent their work from falling into the wrong hands. Among them is a reporting structure, or hotline to authorities, for researchers who become aware of someone developing toxic molecules for non-therapeutic uses.
“We hope that by raising awareness of this technology, we will have gone some way toward demonstrating that although AI can have important applications in healthcare and other industries, we should also remain diligent against the potential for dual use, in the same way that we would with physical resources such as molecules or biologics,” the paper concludes.