• 9 Posts
  • 1.05K Comments
Joined 2 years ago
Cake day: January 16th, 2024


  • I mean, if you’ve ever toyed around with neural networks or similar ML models, you know it’s basically impossible to divine what the hell is going on inside just by looking at the weights, even if you try to plot them or visualise them in other ways.

    There’s a whole branch of ML about explainable or white-box models, because it turns out you need to take extra care and design the system around explainability in the first place to be able to reason about its internals. There’s no evidence OpenAI put any effort towards this, instead focusing on cool-looking outputs they can shove into a presser.

    In other words, “engineers don’t know how it works” can have two meanings: that they’re hitting computers with wrenches and hoping for the best, with no rhyme or reason; or that they don’t have a good model of what makes the chatbot produce certain outputs, i.e. just by looking at an output it’s not really possible to figure out which specific training data it comes from, or how to stop the model from producing it on a fundamental level.

    The former is demonstrably false and almost a strawman; I don’t know who actually believes that. A lot of the people who work at OpenAI are misguided but otherwise incredibly clever programmers and ML researchers, and the sheer fact that this thing hasn’t collapsed under its own weight is a great engineering feat, even if the externalities it produces are horrifying. The latter is, as far as I’m aware, largely true, or at least I haven’t seen any hints that would falsify it. If OpenAI had satisfyingly solved the explainability problem, it’d be a major achievement everyone would be talking about.
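
    A minimal sketch of the point about weights (a hypothetical toy network, nothing to do with OpenAI’s actual models): even for a net this tiny, trained end to end, the learned parameters come out as arbitrary-looking floats that tell you nothing about what the model computes or why.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR dataset: the classic "needs a hidden layer" toy problem
X = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]
Y = [0.0, 1.0, 1.0, 0.0]

# A made-up 2-3-1 network, purely for illustration
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(3)]
b1 = [0.0, 0.0, 0.0]
W2 = [random.uniform(-1, 1) for _ in range(3)]
b2 = 0.0

def forward(x):
    h = [sigmoid(W1[j][0] * x[0] + W1[j][1] * x[1] + b1[j]) for j in range(3)]
    o = sigmoid(sum(W2[j] * h[j] for j in range(3)) + b2)
    return h, o

def mse():
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(X, Y)) / len(X)

lr = 1.0
loss_before = mse()
for _ in range(5000):  # plain per-sample gradient descent
    for x, y in zip(X, Y):
        h, o = forward(x)
        d_o = 2 * (o - y) * o * (1 - o)          # output-layer gradient
        for j in range(3):
            d_h = d_o * W2[j] * h[j] * (1 - h[j])  # hidden-layer gradient
            W2[j] -= lr * d_o * h[j]
            b1[j] -= lr * d_h
            W1[j][0] -= lr * d_h * x[0]
            W1[j][1] -= lr * d_h * x[1]
        b2 -= lr * d_o
loss_after = mse()

print(f"loss: {loss_before:.3f} -> {loss_after:.3f}")
print("W1 =", W1)  # the learned "knowledge": rows of opaque floats
print("W2 =", W2, "b2 =", b2)
```

    The loss goes down, so the thing demonstrably learned *something*, but nothing in `W1` or `W2` reads as “this is XOR”. At billions of parameters that problem only gets worse, which is why explainability has to be designed in, not read off the weights after the fact.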

  • My completely PIDOOMA take is that if you’re self-interested and manipulative, you’re already treating most if not all people as lesser: less savvy, less smart than you. So the mere fact that you can half-ass shit with a bot and declare yourself an expert in everything, without needing things like “collaboration with other people”, ew, is like a shot of cocaine into your eyeball.

    LLMs’ tone is also very bootlicking, so if you’re already narcissistic and you get a tool that tells you that yes, you are just the smartest boi, well… To quote a classic, it must be like being repeatedly kicked in the head by a horse.

  • I think everyone can agree on “this is a slur that we took from Star Wars to be derogatory and to justify our distaste for and opposition to genAI”; it’s just that some people think that’s a bad thing?

    Like, it appears some people think using the n-word is bad because it’s Bad™, not because of its actual dehumanising effect on a group of people. What’s your argument, that we’re dehumanising Grok? Yeah, because it’s not a human! “But if it was about the Jews it’d be bad”, yeah, and if my grandmother had wheels she would have been a bike, what the fuck is your point?

    As for the origins, I also think it’s very important that the word “clanker” comes from Star Wars, since their droids are not sentient, whereas both “toaster” and “skinjob” are actually used as hateful terms towards sentient beings. BSG goes out of its way to drive home the fact that genociding the Cylons would also be bad, actually. The sentience of “skinjobs” is basically the whole point of Blade Runner.