

most kids today are technologically illiterate. We didn't call anyone who watched a ton of TV a tech wiz, because a TV was just a device made for consuming content, even though it uses electricity to work.
well the recent explosion of barbershops all around my country tells a different story
an ai is not a script. You can know what a script does; neural networks don't work that way. You train them and hope you picked the right dataset so it learns what you want it to learn. You can't really test it: you know it works sometimes, you know it will also fail sometimes, and there's jack shit you can do about it. A couple of gigabytes of floating point numbers is not decipherable to anyone.
enjoying it is a different issue. You probably enjoy it because it’s more difficult, which is perfectly valid reasoning
so? someone invented current llms too. Nothing like them existed before either. If they vibe coded with them they’d still be producing slop.
Coding an llm is very very easy. What’s not easy is having all the data, hardware and cash to train it.
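To make the "coding it is easy" claim concrete: the core operation of a transformer LLM, scaled dot-product attention, fits in a handful of lines of plain Python. This is a toy sketch with made-up 2-dimensional inputs, not anyone's actual model; a real LLM is this same math repeated across billions of parameters, which is exactly where the data, hardware, and cash go.

```python
# Toy scaled dot-product attention in pure Python (illustrative sizes only).
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def attention(Q, K, V):
    """Each query row attends over the key rows, mixing the value rows."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# The query matches the first key more strongly, so the output leans
# toward the first value row.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

The hard part is not this code; it is training weights that make the outputs useful.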
i like the analogy
given that expert systems are pretty much just a big ball of if-then statements, then he might be considered to have written the app. Just with way more extra steps.
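For anyone who hasn't seen one, a classic expert system really is just rules applied to a fact base until nothing new fires. Here's a minimal forward-chaining sketch; the rules and facts are invented for illustration, not from any real system.

```python
# Toy forward-chaining expert system: a big ball of if-then rules.
# (conditions, conclusion) pairs; all facts here are made up.
RULES = [
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts):
    """Keep applying rules until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(infer({"has_fever", "has_cough", "short_of_breath"})))
```

Scale that rule list up far enough and you have the "app", just with way more steps, as the comment says.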
exactly, you can only really verify the code if you were capable of writing it in the first place.
And it's an old, well-known fact that reading code is much harder than writing it.
the “target” is to get useful software out. The ai is the tool. In this example, the ai is the gun. It is the tool used to achieve the goal.
Anyone can make an improvised hammer. Stick a rock or a piece of metal on a stick. But that doesn’t make them carpenters, even though they made their own tools.
and he still wouldn't understand its output, because as we clearly see, he doesn't even try to look at it.
So? Some of the people pushing out ai slop would be perfectly capable of writing their own llm out of widely available free tools. Contrary to popular belief, they are not complex pieces of software, just extremely data hungry. That doesn't mean they magically understand the code the llm spits out.
yes. Because that would still mean they didn’t code the app.
“killing is bad!” “but what if the murderer 3d printed his own gun?”
if they build software using mainly ai generated code, then they are a bad coder
marked as duplicate, see <other question from 2005, before LLMs were invented>
also, you can make computers much cheaper, more reliable, more maintainable, and much, much faster if you protect them from space radiation by operating them down here, under the shield of Earth's atmosphere.
very stupid. One of the most difficult things in space is cooling stuff. Sending up a box full of space heaters (almost all of the energy pumped into a computer turns into heat; the actual computation takes next to nothing in comparison) is definitely not a good idea, and definitely not one thought up by a technical person.
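A back-of-envelope number shows why cooling dominates: in vacuum there is no air to carry heat away, so the only way to dump it is radiation, governed by the Stefan-Boltzmann law (power per square metre = emissivity × σ × T⁴). The load, temperature, and coating values below are assumptions picked for illustration, not specs from any real proposal.

```python
# Back-of-envelope radiator sizing for a compute load in vacuum.
# All inputs are illustrative assumptions.
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W / (m^2 K^4)
emissivity = 0.9      # assumed: a good radiator coating
T = 300.0             # assumed radiator temperature, K (~27 C)
load_watts = 1e6      # assumed: a modest 1 MW compute load

flux = emissivity * SIGMA * T**4   # W radiated per m^2 of radiator surface
area = load_watts / flux           # one-sided radiator area required
print(f"{flux:.0f} W/m^2 -> about {area:.0f} m^2 of radiator for 1 MW")
```

At these assumptions you get roughly 400 W per square metre, so a single megawatt already needs on the order of a few thousand square metres of radiator, before counting sunlight hitting it.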
there's lots of handwavy stuff. They even said they "discovered new physics" to explain some disappointing results that other established nuclear physicists found perfectly predictable. "Oh, the physics equation was wrong, we just need to make everything 25% bigger."
Also, the emitted radiation levels will be insane once it’s scaled up
the ones that give up on all the “desirable” aspects of cryptocurrencies? The payment isn’t final until it is on chain. You don’t need to trust anyone to figure out if you’re on the right chain in the first place. Off-chain shit defeats this. What’s the point, other than dressing up the horse before beating it some more?
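The "you don't need to trust anyone" property comes from the fact that anyone can re-verify the chain's links themselves. Here's a deliberately simplified hash-chain sketch (no proof of work, no signatures, invented payloads) showing the verification that off-chain payments sit outside of.

```python
# Toy hash chain: each block commits to the previous block's hash,
# so anyone can verify the whole history with no trusted party.
import hashlib

GENESIS = "0" * 64

def block_hash(prev_hash, payload):
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], GENESIS
    for p in payloads:
        h = block_hash(prev, p)
        chain.append({"prev": prev, "payload": p, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = GENESIS
    for b in chain:
        if b["prev"] != prev or block_hash(b["prev"], b["payload"]) != b["hash"]:
            return False
        prev = b["hash"]
    return True

chain = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(verify(chain))
chain[0]["payload"] = "alice pays bob 500"   # tampering breaks every later link
print(verify(chain))
```

A payment that never lands in a block is simply invisible to this check, which is the complaint in the comment above.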
for example? Because your statement is similar to asking a chef what’s on the menu and them replying with “food”
a plane. A flying car is called a plane.