So they’re a big-tent movement, meaning there’s a variety of people who all share the goal of regulating AI, although some of the messaging is geared toward the belief that a large catastrophe is possible.
On the supposed alignment between AI doomers and accelerationists - their goals and messaging are exactly opposite! I’m not sure how believing in AI extinction helps the AI industry - in what other industry would you claim your product is going to kill everyone as a marketing strategy? I don’t see fossil fuel companies doing this.
In general, I think the people who want to regulate AI have a lot of common goals and ideas. Purity testing over which AI risk is the worst helps nobody. For instance, one law I think a lot of people could get behind, regardless of whether you believe in Terminator scenarios or not, is liability for AI companies, where they are directly fined for harms their models cause. This could encompass environmental damage, job loss, etc.
Well, a majority of Americans support more regulation of AI, and support a ban on AI smarter than humans. Politicians in the US do need voters to get reelected.
There’s also a variety of laws that could be passed, some of which don’t directly threaten as much AI progress and which moneyed interests might be less hostile to: liability for AI companies, evaluations of the social and environmental impact of AI, pauses on certain kinds of development, etc. It doesn’t have to be all or nothing, and there’s wide support among constituents for doing something.
On the question of what different countries will do, China and the EU already have more AI regulation in place than the United States, imposed unilaterally. And with an international treaty to regulate AI, by definition all parties are bound by it, so no party gets to advance ahead of the others.