Hey, just wanted to plug a grassroots advocacy nonprofit, PauseAI, that’s lobbying to pause AI development and/or tighten AI regulation over concerns about the environment, jobs, and safety. They recruit volunteers to spread awareness, contact Congress, etc. – it’s one more way you can push back against the AI industrial complex!
So they’re a big-tent movement, meaning there’s a variety of people who all share the goal of regulating AI, although some of the messaging is geared toward the belief that a large catastrophe is possible.
On the supposed alignment between AI doomers and accelerationists – their goals and messaging are exactly opposite! I’m not sure how believing AI could cause extinction helps the AI industry – in what other industry would you claim your product is going to kill everyone as a marketing strategy? I don’t see fossil fuel companies doing this.
In general, I think the people who want to regulate AI share a lot of common goals and ideas. Purity testing over which AI risk is the worst helps nobody. For instance, one law a lot of people could probably get behind, whether or not you believe in a Terminator scenario, is liability for AI companies: fining them directly for harms their models cause. That could cover environmental damage, job loss, and so on.
To clarify something: I don’t believe current AI chatbots are sentient in any shape or form, and as they are now, they never will be. There’s at least one piece missing before we have sentient AI, and until we have it, making the models larger won’t make them sentient. LLM chatbots take the text and calculate how likely each word is to follow it; then, based on those probabilities, a result is picked at random. That’s the reason for the hallucinations we observe, and it’s also the reason the hallucinations will never fully go away.
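The sampling step described above can be sketched in a few lines. This is a toy illustration with made-up scores, not a real model – an actual LLM computes the scores with a huge neural network over tens of thousands of tokens, but the final "pick a word at random, weighted by probability" step works the same way:

```python
import math
import random

# Made-up scores (logits) for what might follow the prompt
# "The cat sat on the". In a real LLM these come from the network.
logits = {"mat": 4.0, "chair": 2.5, "roof": 2.0, "moon": 0.5}

# Softmax turns the scores into probabilities that sum to 1.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Pick a token at random, weighted by probability. "mat" is the most
# likely choice, but "moon" can still come up -- which is why the same
# prompt can produce different, and sometimes wrong, continuations.
token = random.choices(list(probs), weights=list(probs.values()))[0]
print(token)
```

Because the output is drawn from a probability distribution rather than looked up in a knowledge base, there’s no step where the model checks whether the chosen word is *true* – only whether it’s *likely*.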
The AI industry lives on speculative hype – all the big players are losing money on it. So people saying that AI can become a god and kill us all actually feed that hype: after all, if it can become a god, then all we need to do is tame said god. Of course, the truth is that it currently can’t become a god, and maybe the singularity is impossible. As long as no government takes the AI doomers seriously, they provide free advertisement.
Hence AI should be opposed on the basis that it’s unreliable and wasteful, not that it’s an existential threat. Claiming that current AI is an existential threat fosters hype, which increases investment, which in turn causes more environmental damage from wasteful energy usage.