Hey, just wanted to plug a grassroots advocacy nonprofit, PauseAI, that's lobbying to pause AI development and/or increase regulations on AI due to concerns around the environment, jobs, and safety. They recruit volunteers to spread awareness, contact Congress, etc – it's just one more way you can fight back against the AI industrial complex!
No, they're concerned about AI becoming sentient, taking over the world, and killing us all. This, in turn, makes them little different from the people pushing for unlimited AI development, as the only difference between those two groups is that the latter believes they'll be able to control the superintelligence.
If you look at their sources, they most prominently feature surveys of people who overestimate what we currently call AI. Other surveys are flat-out misrepresented. The survey behind the claimed 25% chance that we'll reach AGI, the 2025 State of AI Engineering survey, admits that for P(doom) it defined neither 'doom' nor the time frame of said doom. So, basically, if we die out because we all fap to AI images of titties instead of getting laid, that counts as AI-induced doom. Also, on said survey, 10% answered a 0% chance, with 0% being one of only two precise options offered; most other options covered ranges of 25 percentage points each. The other precise option was 100%.
Basically, those guys are useful idiots for the AI industry, pushing a narrative not too dissimilar from the one pushed by the AI boosters. Don't support them.
So they're a big tent movement, meaning there's a variety of people who all share the goal of regulating AI, although some of the messaging is geared toward the belief that a large catastrophe is possible.
On the supposed alignment between AI doomers and accelerationists - their goals and messaging are exactly the opposite! Not sure how believing in AI extinction helps the AI industry - in what other industry would you claim it’s going to kill everyone as a marketing strategy? I don’t see fossil fuel companies doing this.
In general, I think the people who want to regulate AI have a lot of common goals and ideas. Purity testing over which risk of AI is the worst helps nobody. For instance, one law I think a lot of people could get behind, regardless of whether you believe in Terminator or not, is liability for AI companies, where they are directly fined for harms that their models cause. This could encompass environmental damage, job losses, etc.
To clarify something: I don't believe that current AI chatbots are sentient in any shape or form, and as they are now, they never will be. There's at least one piece missing before we have sentient AI, and until we have that, making the models larger won't make them sentient. LLM chatbots take the text and calculate, for each possible next word, how likely it is to follow. Then, based on those probabilities, a result is picked at random. That random sampling is the reason for the hallucinations that can be observed, and it's also the reason why the hallucinations will never go away.
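To make that concrete, here's a minimal sketch in Python of that generation loop. The `next_token_probabilities` function is a made-up stand-in for the actual neural network; the point is only that the output is sampled from a probability distribution, not looked up as a fact.

```python
import random

# Toy sketch of an LLM's generation loop. In a real model the
# probabilities come from a neural network with billions of weights;
# next_token_probabilities here is a hypothetical stand-in.
def next_token_probabilities(context: str) -> dict[str, float]:
    # Hypothetical fixed distribution; a real model scores every
    # token in its vocabulary based on the context.
    return {"Paris.": 0.7, "Lyon.": 0.2, "Berlin.": 0.1}

def generate(context: str, max_tokens: int = 1) -> str:
    for _ in range(max_tokens):
        probs = next_token_probabilities(context)
        # The next word is sampled at random, weighted by probability,
        # so the plausible-but-wrong "Berlin." gets picked ~10% of the time.
        word = random.choices(list(probs), weights=list(probs.values()))[0]
        context += " " + word
    return context

print(generate("The capital of France is"))
```

Run it a few times and it will occasionally print "Berlin." with total confidence, which is the hallucination problem in miniature.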
The AI industry lives on speculative hype; all the big players are losing money on it. So the existence of people saying that AI can become a god and kill us all helps further that hype. After all, if it can become a god, then all we need to do is tame said god. Of course, the truth is that it currently can't become a god, and maybe the singularity is impossible. As long as no government takes the AI doomers seriously, they provide free advertisement.
Hence AI should be opposed on the basis that it's unreliable and wasteful, not that it's an existential threat. Claiming that current AI is an existential threat fosters hype, which increases investment, which in turn results in more environmental damage from wasteful energy usage.