

also if you could somehow not be into fascism, not have opinions about age-of-consent, not be a sex pest, not be into eugenics/phrenology while you build a browser, that would be great.


One thing I’ve heard repeated about OpenAI is that “the engineers don’t even know how it works!” and I’m wondering what the rebuttal to that point is.
While it is possible to write near-incomprehensible code and build an extremely complex environment, there is no reason to think there is absolutely no way to derive a theory of operation, especially since every part of the whole runs on deterministic machines. And yet I’ve heard this repeated at least twice (once on the Panic World pod, once on QAA).
I would believe that it’s possible to build a system so complex and so poorly documented that it’s incomprehensible on its surface, but the context in which the claim is made is not one of technical incompetence; rather, the claim is often hung as bait to draw one toward thinking that maybe we could bootstrap consciousness.
It seems like magical thinking to me, and a way of saying one or both of “we didn’t write shit down and therefore have no idea how the functionality works” and “we have no practical way to determine how a specific output was arrived at from any given prompt.” The first is, in part or in whole, unlikely: the system has to be comprehensible enough that new features can be added, so engineers would have to grok things well enough to do that. The second is a side effect of not being able to observe all the actual inputs at the time a prompt was made (e.g. training data, user context, and system context can all be viewed as implicit inputs to a function whose output is, say, 2 seconds of Coke Ad slop).
Anybody else have thoughts on countering the magic “the engineers don’t know how it works!”?
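To make that second point concrete, here’s a toy sketch (every name here is hypothetical, and the hash is just a stand-in for inference, nothing to do with any real model stack): the output is a perfectly deterministic function of its inputs, but the visible prompt is only one of them.

```python
import hashlib

# Toy sketch (all names hypothetical): training data, system context, user
# context, and the sampling seed are implicit inputs an outside observer
# can't see. A hash stands in for the actual inference computation.
def generate(prompt, training_data, system_context, user_context, seed):
    # Deterministic given ALL inputs: same arguments -> same output.
    material = "|".join([prompt, training_data, system_context, user_context, str(seed)])
    return hashlib.sha256(material.encode()).hexdigest()[:12]

# Pin every input and the output is perfectly reproducible...
a = generate("make an ad", "corpus-v1", "sys", "alice", 42)
b = generate("make an ad", "corpus-v1", "sys", "alice", 42)
assert a == b

# ...but vary one *hidden* input and the same visible prompt yields
# different slop, which is why the process looks inscrutable from outside.
c = generate("make an ad", "corpus-v1", "sys", "bob", 42)
assert a != c
```

Nothing magic is happening: the function never stops being deterministic, you just can’t see most of its arguments.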


someone is definitely going to wind up shagging that child’s toy.


What if we turned a markov chain containing two decades of internet fan fiction into an oracle. Just spitballing here…


is one of the source texts for the markov chain text generator our old favorite Harry Potter fan-fic?


“what if computation was more wrong, but on the other hand faster and used less power?”


What’s the elevator pitch for Extropic again? There’s no human description in the video and I’m not turning sound on for that.


Honestly, this sort of quote from Goertzel is the kind of thing I would expect as the satirized musings from a pompous character in the Illuminatus Trilogy.


“I’m sort of a complex chaotic systems guy, so I have a low estimate that I actually know what the nonlinear dynamic in the memosphere really was,”
I say right before inhaling deeply from a bag in which I have dispensed a hefty amount of spray paint.


WTF I was doing all this in EMACS in 2008.


Moar like power the butt.


Is that why the robot is wrapped in a giant cum-sock?


That WSJ review is something special, and I didn’t have sound or CC on, so I’m sure there’s some weapons-grade stupid going on in the dialog that I’m missing. I stopped watching right around when they put up a picture of Alan Turing (AI Pioneer!) and a picture of the “AI Pilot” whose first name is Turing, and then highlighted that both have the word “Turing” in their names.
Also that overgrown Roomba with hip dysplasia took 5 minutes to put two glasses in a dishwasher poorly.


Oh God my brain is so used to turning typos into likely intended words that I missed “free-sprinted”, which I’m going to guess in this context involves being athletic and horny and bottomless and possibly suffering from protein-powder-induced lead poisoning.
That might explain why copilot is a cum sprite


What professional athlete is a) working for OpenAI and b) wants to turn Sora into the bottomless fountain of goon?


I once had someone tell me to my face that comments were a code smell.


Oh, so Call of Duty


This will be easy thanks to the “Benevolence of the Rocket” equation as seen on Trashfuture.


“remember 1-900 numbers? They’re back! In AI form!”
Also I browsed other items on the site the phone came from and holy shit I have never seen a more cursed collection of products draped in Christmas shit.
the QAA mention came during that episode, and I think there it was more illustrative of how a person can progress to conspiratorial thinking about AI. The Panic World mention was from an interview with Ed Zitron’s biggest fan, Casey Newton, if I recall correctly.