Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.
Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.
Any awful.systems sub may be subsneered in this subthread, techtakes or no.
If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.
The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).
Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.
(Credit and/or blame to David Gerard for starting this.)
John Scalzi’s shitcanned any book club plans for the foreseeable future, and AI spammers are the reason why.
Usually, you wake up on a lifeless beach that’s adorned with some sort of abandoned marble temple. It’s supposed to be beautiful, but instead it’s really sad. Almost unbearably sad. So much so that you want to get away from it. So you crawl downward into these vents going below the horrible temple, and suddenly it’s like you’re moving through the innards of an incomprehensible machine that’s thudding away, thud, thud, thud. And as you get deeper, the metal sidings are carved with scrawled ominous curses and slurs directed toward you, and you hear the voices, louder than before, and you somehow know these people are in pain because of you. It keeps getting colder. Color drains from the world. And you see the crowd through the slats of the vents: pale and emaciated men, women, and children from centuries to come, all of them pressed together for warmth in some sort of unending cavern. What clothes they have are torn and ragged. Before you know it, their dirty hands and dirty fingernails lurch through the grates, and they’re reaching for you, tearing at your shirt, moaning terrible things about their suffering and how you made it happen, you made it, and you need to stop this now, now, now. And next they’re ripping you apart, limb from limb, and you are joining them in the gray dimness forever.

another Onion banger for these trying times
“Then you wake up in a cold sweat and can’t breathe at all, almost like you’re drowning—I guess from the weight of untold mobs of people leaping on you and ripping you apart”
the real Scam Altman would never feel any kind of remorse or emotion about this
Sam Altman Is Realizing He Made a Gigantic Mistake
You don’t say!
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” he wrote. “Good learning experience for me as we face higher-stakes decisions in the future.”
Yes of course this is just a learning opportunity… higher-stakes decisions in the future… Making deals with the biggest military in the western world that involve autonomous use of weapons and possible escalation to all-out nuclear war sounds pretty low stakes.
Fucking muppet
Altman claimed that the company would “amend our deal” to add the prohibition of “deliberate tracking, surveillance, or monitoring of US persons or nationals.”
…so the original statement was a lie then? the CEO who is notorious for being a liar lied? I am very surprised about this information.
Followup on the Mass AI Bill: Russ Wilcox has 180’d on it:
https://russwilcoxdata.substack.com/p/93a-the-three-characters-that-should
Buried in the penalty clause, the part of the bill that nobody reads, is a single reference: violations “shall be punishable in the same manner as provided in Chapter 93A of the General Laws.”
For those outside Massachusetts: Chapter 93A is the state’s consumer protection statute. It is, by most accounts, the most aggressive consumer protection law in America.
Here’s what 93A unlocks. Anyone can sue, not just the government. Class actions are on the table. If the court finds a violation was willful or knowing, damages get tripled. And the bar for what counts as “unfair or deceptive” is lower than in almost any other state.
Now bolt 93A onto all of that. What do you get?
You get a bill that doesn’t need a single regulator to lift a finger. You get a bill that funds its own enforcement through plaintiff attorneys who can file class actions, collect treble damages, and recover legal fees. You get the ADA website-accessibility litigation playbook, where lawyers systematically identify technical violations and file suits at scale, applied to every piece of AI-generated content touching Massachusetts.
Private right of action, fuck yeah. Turns grok into a legal fees dispenser.
The bill doesn’t need to be well-drafted to be dangerous. It needs to be vague, broad, and connected to 93A.
lol
new episode of odium symposium. it’s a tribute to knowledge fight, in which we dissect an episode of nick fuentes’s show. i was nervous about how this would turn out but i think it’s actually my favorite episode yet.
https://www.patreon.com/posts/11-groyper-151852222 (links to other platforms at www.odiumsymposium.com)
God that was bleak - I thought Nick was bad in his guest spots on Alex’s show (seen via Knowledge Fight, of course) but apparently you really do need at least two layers of insulating podcast to avoid suffering critical psychic damage from that level of hatred. I appreciated the acknowledgement that in order to feel at all okay playing clips you needed to sanewash him a little bit. I’m pretty sure that JorDan do the same thing with Alex and don’t acknowledge it nearly often enough.
I also feel like some of Nick’s schtick is about trying to position himself and maintain his position in the right wing grifter bigot-industrial complex. Like, the open disdain for his audience and presenting his actually pretty straightforward feelings on the halftime show as somehow brave and iconoclastic is also about differentiating himself and making his audience feel superior to Alex, Tucker, Candace, etc. In that sense the open disdain for the audience serves another purpose in terms of reinforcing hierarchy. Look at how great it feels for me to be better than you. And even you are better than the chuds, who are better than the racialized other.
The great chain of bleating
wrt the first part, nick consistently outmaneuvers people who bring him onto their platforms. he’s honestly brilliant at understanding who the audience is, what frame he’s appearing in, and how to signal given those circumstances. i didn’t understand until i started prepping for this episode that nick is actually lazy and incurious in almost the exact same way alex jones is. dan and jordan notice and call out how he effortlessly establishes dominance over alex, but i think there’s a subtler game going on where nick manages to appear competent and informed compared to alex, and you don’t realize that’s just an artifact of conversational skill until you hear nick on his own show.
wrt the second part, i could not agree more and i’m very glad to hear that is a takeaway because it is absolutely something i was hoping to communicate. that’s the freudianness of it all, how these existing patterns of relations to another get played out and reenacted through the audience’s relationship to nick, and vice versa
This preprint just shared by Gary Marcus is interesting.
People increasingly use large language models (LLMs) to explore ideas, gather information, and make sense of the world. In these interactions, they encounter agents that are overly agreeable. We argue that this sycophancy poses a unique epistemic risk to how individuals come to see the world: unlike hallucinations that introduce falsehoods, sycophancy distorts reality by returning responses that are biased to reinforce existing beliefs…
These results reveal how sycophantic AI distorts belief, manufacturing certainty where there should be doubt.
LLMs an addictive psychological hazard: confirmed?
Stop the presses. Dude who’s into LLMs has shit takes about open source software.
Apparently OSS devs that publish under non-commercial licenses are shutting people out?
Definitely some bespoke what the fu-
Skyview.social mirror so everyone can see - he’s locked out everyone who’s not signed in.
@BlueMonday1984 @JFranek “If you use this code, you can never charge money for whatever you used it in” sorry what this has the same energy as the brain-wormed Americans who think that socialised healthcare means doctors work for free
lmfao

Thank you, should have checked that.
this is confusing, how many licenses that are “NonCommercial” are mainstream Free/Open source? From what I’ve seen they’re deffo a minority anyway.
There are licenses that effectively repel corporate use without a non-commercial clause; I looked at them on Open Source SE a while ago, including a fun bit of dentistry previously, on Lobsters. GPLv3 and AGPLv3 are examples in common usage. This might help to illuminate our boy’s actual problem: he can’t use Free Software without complying with the onerous requirement of ensuring the Four Freedoms by not plagiarizing, and he really wants to plagiarize.
Copyleft is non-commercial haven’t you heard? I mean its really unfair, the code is completely free but you are not allowed to create the torment nexus without everybody seeing your work.
I can think of only one notable project that ever used one, and that’s Bookwyrm with the Anti-Capitalist Software License v1.4.
But this seems too vague-posty to refer to something that specific. Prolly just someone butthurt over copyleft.
Right? Like he’s never heard of the MIT or Apache 2 licences.
You could argue that for example AGPL3 is non-commercial in practice due to the requirements to disclose code. But even then.
well would you look at that, some consequences
Given the delay, I’ll bet an internal review discovered more instances in other articles where COVID couldn’t be the excuse.
Someone with more time and motivation could probably look at his past articles and see if any others have been quietly pulled or edited, although I have no idea if ars could be assed to change the ones which weren’t publicly criticized
Thought inspired by some git on the red site, basically their premise was that birthrates are declining because we no longer have a society with a 2-parent nuclear family with one “breadwinner”.
Here’s my counterproposal to the implied idea we need to implement The Handmaid’s Tale:
Ban contraception and abortion but if you get pregnant, there is no stigma to giving birth out of wedlock. The delivery is safe and paid for, and should you wish, the child will be reared in state-funded orphanages. These institutions will receive more than adequate funding. Their charges will be given preferred entry to the best schools and universities, as well as preferential treatment when it comes to future employment. It will be illegal to discriminate against anyone so raised.
Surely this will raise the birthrate, right?
Mildly positive news: there is a fork of the Zed editor with the llm autocomplete stuff ripped out now: https://gram.liten.app/posts/first-release/
(I’ve used zed with the ai kill switch and really like the buffer/editing ux; but it’s always felt a bit gross, I’m excited to see where the fork goes)
Oooh I wish this project a lot of success!
Zed is interesting but the project’s very pro-AI stance keeps me away from it. So a fork without that stuff is great, hope that works out longer-term.
Are they planning to follow the upstream updates from Zed, or is it a hard fork?
seems unlikely, see here for reference
just got a job in mathematical publishing. it’s work i think i’ll actually enjoy and expect to be very good at, it pays much better than any other job i’ve had previously (and they maxed out the position’s pay range, which i wasn’t expecting) and it has about a month of paid leave a year. such a relief
Felicitations!
(“A job,” Blake thinks. “I need to find one of those.”)
fuck yeah! congrats!
Hell yeah!
This piece on how doomers and rationalists have made everything worse with their “AGI is nigh” shtick and ended up giving AI companies way more power than they should and getting chatbots into the military, where they will almost certainly fuck up and kill people
if we had made the podcast series on rationalists, their importance as useful idiots for billionaires was the structure i wanted to hang the whole thing on. so this is a gratifying read. that said i think the ideas here will be familiar to many stubsack readers
The rationalist view of the world assumes, at some level, that the relevant actors are optimizing for well-understood, predictable variables and a clear understanding of what best serves their self-interest. What it cannot account for is bad faith, impulsiveness, ideological motivation untethered from evidence, random instances of force majeure, and personal whims and petty rivalries.
i will go further and say that not accounting for such things is considered virtuous in rationalist ideology
It’s especially strange because becoming less prone to bias and developing a clear understanding of what serves your interest is so much of the pitch for Rationalism as a community/ideology/project. Like, here are unbearably long essays that promise to help cultivate the superpower of seeing the world clearly and acting in it effectively, now if you acknowledge that nobody outside this small set of group homes is actually doing that you’ll be shunned. And that’s not getting into how easily exploitable those assumptions of good faith are by bad-faith actors. It comes back to that quote from Scott that has stuck in my head apparently more than it did his: if you build a community based on the principle that you will absolutely never have a witch hunt you will end up living among approximately seven principled civil libertarians and eleven million goddamn witches, and this is true even if you’re right that witch hunts are bad.
i think this is exactly why they had to come up with - or rather, misappropriate - the concept of coupled vs decoupled thinking. when they (especially the more, ahem, human biodiversity minded of them) fold ridiculous claims about what constitutes virtuous cognition into scientific and sophisticated sounding terminology, it makes those claims seem aligned with the broader sales pitch of rationalism
also that scott quote is excellent. i hadn’t heard that one before
I actually dug up the context to make sure I wasn’t forgetting something horrific. It’s from a 2017 piece (CW: SSC Link) back before he went mask-off but was firmly in the “I’m a liberal and I talk exclusively about how liberals and their institutions suck” useful idiot phase of his career, so the overall essay is about how actually the wing nuts have a point when they say that all so-called neutral institutions are actually secret communist indoctrinators that want to trans your children and take your guns. I’m paraphrasing, obviously; he believes/pretends that when they called these things left-wing they didn’t mean “literally in league with Stalin and the Devil”. However, in the middle of the usual beigeness he tries to maintain his air of neutrality by having a section on how bad Voat ended up being, which concludes with:
The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.
it’s amusing to me that these nerds thought they could in any way affect policy even with a sane administration, not to mention this bugshit crazy one
like I’ve said before, I’d be perversely happy if we managed to off ourselves by building the robot god. beats drowning in our own filth or blowing ourselves up
I mean yeah I guess in a competition between getting a bullet directly through my brain, getting all my limbs chainsawed off with my head last and being drowned in boiling water, the bullet would win every time. Though the real perversely funniest outcome is if superintelligence turns out to be completely impossible and we fuck ourselves over with garbage to mediocre AI embedded in all our critical infrastructure
Obviously I don’t want the human race to go extinct, but if there was a choice of inevitable outcomes where we do, building an inimical superintelligence at least implies agency, not carelessness.
Anyway Big Yud’s fantasy of a precisely timed diamondoid-bacteria delivered killshot to every human being at the same time might sound terrible, but from a sensory perspective of the victims, it’s basically suffering-free. You go about your day, BAM nothingness. Maybe there’s a difference in timing so you see your partner keel over a split second before you die - again, you might not even realize what is happening.
I am not sure where this idea of a global instantaneous simultaneous genocide comes from, maybe a tit-for-tat escalation to counter every argument against shutting down the robot god, but from a storytelling perspective it’s pretty useless. There’s no drama where the survivors lament their loss or brood over what might have been. It’s just a plug being pulled on the simulation.
(weirdly it’s also the logical outcome of asking a computer to “end human suffering”, a bit like the Robobrain logic in Fallout 4’s Automatron expansion, but I doubt it’s meant that way)
That’s precisely what I was thinking. Obviously I don’t want everyone to die but if you forced me to choose between apocalypse scenarios, I’m picking something painless and instantaneous (like a super-virus that activates immediately, or a bullet in the head) over being slowly tortured to death via something like nuclear radiation or extreme heat.
A real evil robot god would keep a sample of humanity alive forever in order to torture them as reprisal for them being really really mean to it back in the day.
That is what happens when your mode of analysis is closer to erotic Harry Potter fan fiction (which is indeed the medium in which Yudkowsky has delivered some of his prognostications)
I was going to throw a point of order about not all fanfic being erotic, but given how they fetishize “intelligence” and “rationality” I can’t be sure that they don’t get off on that slog.
your mode of analysis is closer to erotic Harry Potter fan fiction
To give Gary Marcus credit here, HPMOR may not be erotic, but many of Eliezer’s other works are erotic (or at least attempt to be), the most notable being Planecrash/Project Lawful which has entire sections devoted to deliberately bad (as in deliberately not safe, sane, consensual) bdsm.
Eliezer tried to promote/hype up Project Lawful on twitter, maybe hoping it would be the next HPMOR, but it didn’t quite take. Maybe he failed to realize how much of HPMOR’s success was being in the popular genre of Harry Potter fanfic (which at the time had crap like Partially Kissed Hero or Harry Crow as among its most popular works), and not from his own genius writing.
Also I think there’s enough manipulation fantasy in HPMOR, and enough lack of agency from Hermione, that it qualifies—in its own way—implicitly as being erotic.
I know I’ve said somewhere on here before that “Harry Potter for pop science nerds” is fanfiction on easy mode, but I’ll stand by it.
You can’t build a fandom full of that many fucked-up people without some of them getting horny in a fucked-up way.
Edit to add: example of a non-erotic fanfic https://archiveofourown.org/works/73396436
https://www.wired.com/story/openai-fires-employee-insider-trading-polymarket-kalshi/
lol. Between this and the ayatollah clawback, I’m expecting some entertaining litigation.
Stumbled across a YouTube slop-farm calling itself The Interactive Archive recently, and the whole thing is just plain shameless:

AI slop banner, AI slop thumbnails, AI slop avatar, it’s all slop from top to fucking bottom.
It’s getting pitiful views, too - the highest-viewed video on the damn thing (as of this writing) is the one about arena shooters dying off, sitting at just under 400 views:

Putting that into context, a random screen recording I uploaded hit nearly 700 through sheer luck.

Looks like they’re using the standard blue/orange color scheme
the one about arena shooters dying off
That extruded piece of audiovisual garbage has been bugging me massively for a couple hours now, so I’m gonna make a quick recommendation:
If you’re looking for a non-dogshit video about the arena shooter’s implosion, boomer shooter YouTuber Skeleblood made a pretty solid one a couple years ago.