You can consider this thread an interest check. If a lot of people respond positively, I’ll go ahead with the project. If this thread collects dust, or the response is mostly negative, I’ll do something else with my time.
This idea came to me while I was reading this wretched article from Time Magazine, in which Yudkowsky is so afraid of AI chatbots that he’s willing to risk a nuclear holocaust to stop them:
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
Of course, the real reason for all this AI fearmongering is twofold: to throttle third-world access to these platforms, and to make these systems seem much more capable than they actually are. But so often, the state and corporate interests pushing propaganda get high on their own supply.
The premise: What if those who shared Yudkowsky’s views got into positions where they had significant power to direct NATO policy? What if Russia, in defiance of these policies, began constructing an LLM cluster of unprecedented scale in Yekaterinburg in 2030? What if this led to escalating pressure from NATO that eventually pushed the world to the brink of a full-scale nuclear exchange?
The format: Each week (barring schedule slips), I will post a mock news megathread to c/writing. This megathread will contain a date and a list of relevant news links, including excerpts. The “links” won’t lead anywhere real, of course; they’ll just be something like www.fake-website.com, but using the link format will help give a sense of verisimilitude.
Your role: You, users of Hexbear, will play your lovely shitposting selves, reacting to these events as if they were actually happening. All levels of participation are welcome, as long as they are in-character and fit the tone of the story, even if it’s just a simple “Oh god we’re all going to die over a chatbot”. “Yes, and” improv that includes faux links and/or reactions to additional news events is also welcome, provided it’s not too disruptive to the overall narrative: “terrorist firebombs chip fabrication plant in Shenzhen” or “Burkina Faso pursues closer ties with Russia to gain economic independence from France” would be fine, for example, but I would have to veto “China nukes Washington D.C.”
Narrative conventions: To keep things interesting, I will assume rough military parity between NATO and Russia (even if this requires some fudging on, for example, the efficacy of the F-35 and the US’ ability to produce it in meaningful numbers). The US will have strong means of economic coercion through the petrodollar, but Russia’s industrial capacity and third-world ties will allow it to mitigate American attempts to economically strangle it. Technology will not significantly deviate from the current day, except that some weapons and technologies currently available only in small numbers (such as the S-500 air defense system) might have entered mass production.
If this is something you would want to participate in, please let me know so I can tag you if this gets off the ground.
AI usage would be the dumbest casus belli for nuclear war, so of course it’s one of the most plausible possibilities.
Please be careful with that lathe
This is an interesting idea; I’d like to see where it ends up.
Thank you!
Re: worldbuilding, why is Russia the one trying to build a superintelligence? It seems more plausible to me that China would be the one to build it, given that China invests heavily in the technology and most Western “AI race” literature focuses on a US-vs-China AI conflict.
Also I would like to be tagged
Without dipping into spoiler territory, I have plans for China that require them not to be the one building the AI cluster.
If you want an in-universe justification, you could say that the Ukraine War and its legacy have made Russia highly adversarial toward NATO. So, in contrast to China, which is trying to stay peaceful and wouldn’t consider an elaborate chatbot worth kicking off an international crisis over, Russia would be much more willing to do something that directly challenges NATO’s authority.