That's right. It is not hard to imagine similarly disastrous GPT/AI "plug-ins" with access to purchasing, manufacturing, robotics, bioengineering, genetic manipulation resources, etc. The only way forward for humanity is self-restraint through regulation. Which of course gives no guarantee that the cat won't be let out of the bag anyway (edit: or that earlier events such as nuclear war or climate catastrophe won't kill us off sooner).
Why not regulate the genetic manipulation and bioengineering? It seems almost irrelevant whether it's an AI who's doing the work, since the physical risks would generally exist regardless of whether a human or AI is conducting the research. And in fact, in some contexts, you could even make the argument that it's safer in the hands of an AI (e.g., I'd rather Gain of Function research be performed by robotic AI on an asteroid rather than in a lab in Wuhan run by employees who are vulnerable to human error).
We can't regulate specific things fast enough. It takes years of political infighting (this is intentional! government and democracy are supposed to move slowly so as to break things slowly) to get even partial regulation. Meanwhile every day brings another AI feature that could irreversibly bring about the end of humanity or society or democracy or ...
It's obviously false. Nuclear weapon proliferation has been largely prevented, for example. Many dangerous pathogens and lots of other things are not available to the public.
Asserting inevitability is an old rhetorical technique; its purposes are obvious. What I wonder is, why are you using it? It serves people who want this power and have something to gain, the people who control it. Why are you fighting their battle for them?
Nuclear materials have fundamental material chokepoints that make them far easier to control.
- Most countries have little to no uranium deposits and so have to find a uranium-producing ally willing to play ball.
- Production of enriched fuel and R&D are both outrageously expensive, generally limiting them to state actors.
- Enrichment has massive energy requirements and requires huge facilities, tipping off observers to what you're doing.
Despite all this, and despite decades of strong international non-proliferation agreements, India, Pakistan, South Africa, Israel, and North Korea have all developed nuclear weapons in defiance of the UN and international law.
In comparison, the only real bottleneck in the proliferation of AI is computing power - but the cost of running an LLM is a pittance compared to a nuclear weapons program. OpenAI has raised something like $11 billion in funding. A single new proposed US Department of Energy uranium enrichment plant is estimated to cost $10 billion just to build.
I don't believe proliferation is inevitable, but it's very possible that the genie is out of the bottle. You would have to convince the entire world that the risks are large enough to warrant putting on the brakes, and the dangers of AI are much harder to explain than the dangers of nuclear weapons. And if rival countries cannot agree on regulation, then we're just going to see a new arms race.
You can’t make a nuclear weapon with an internet connection and a GPU. Rather than implying some secondary motive on my part, put a modicum of critical thinking into what makes a nuke different from an ML model.