The thing I hate the most about AI and its ease of access: the slow, painful death of the hacker soul—brought not by war or scarcity, but by convenience. By buttons. By bots. […]

There was once magic here. There was once madness.

Kids would stay up all night on IRC with bloodshot eyes, trying to render a cube in OpenGL without segfaulting their future. They cared. They would install Gentoo on a toaster just to see if it’d boot. They knew the smell of burnt voltage regulators and the exact line of assembly where Doom hit 10 FPS on their calculator. These were artists. They wrote code like jazz musicians—full of rage, precision, and divine chaos.

Now? We’re building a world where that curiosity gets lobotomized at the door. Some poor bastard—born to be great—is going to get told to “review this AI-generated patchset” for eight hours a day, until all that wonder calcifies into apathy. The terminal will become a spreadsheet. The debugger a coffin.

Unusually well-written piece on the threat AI poses to programming as an art form.

  • jyl@sopuli.xyz · 2 days ago

    Unlike vibe coding, asking an LLM how to access some specific thing in a library when you’re not even sure what to look for is a legitimate use case.

    • IsoKiero@sopuli.xyz · 2 days ago

      You’re not wrong, but my personal experience is that it can also lead you down a pretty convincing but totally wrong path. I’m not a professional coder, but I have at least some experience, and I’ve tried the LLM approach when trying to figure out which library/command set/whatever I should use for the problem at hand. Sometimes it gives useful answers, sometimes it’s totally wrong in a way that’s easy to spot, and at worst it gives you something that (at least to me) seems like it could work. In that last case I then spend more or less time figuring out how to use the thing it proposed, fail, eventually read the actual old-fashioned documentation, and notice that the proposed solution is somewhat related to my problem but totally wrong.

      At that point I would actually have saved time by doing things the old-fashioned way (which is getting more and more annoying as search engines get worse and worse). There are legitimate use cases too, of course, but you really need to have at least some idea of what you’re doing to evaluate the answers LLMs give you.
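
      As a rough sketch of what “evaluating the answer” can look like in practice (Python here; the module and function names below are placeholders, not anything from this thread), you can at least confirm a proposed call actually exists in the installed library before sinking time into it:

      ```python
      # Check whether an LLM-suggested attribute really exists in the installed
      # library before building on it. "requests" / "get_json" are hypothetical
      # stand-ins for whatever the model actually proposed.
      import importlib
      import inspect

      def suggestion_exists(module_name: str, attr_name: str) -> bool:
          """True only if the module imports and actually has the attribute."""
          try:
              module = importlib.import_module(module_name)
          except ImportError:
              return False
          return hasattr(module, attr_name)

      if suggestion_exists("requests", "get_json"):
          fn = getattr(importlib.import_module("requests"), "get_json")
          print(inspect.getdoc(fn))  # read the real docstring, not the LLM's summary
      else:
          print("Suggested function not found; back to the actual documentation.")
      ```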

      • jyl@sopuli.xyz · 2 days ago

        Yeah, I guess that can happen. For me, it has saved much more time than it has wasted, but I’ve only used it on relatively popular libraries with stable APIs, and I don’t ask for complex things.

    • dustyData@lemmy.world · 1 day ago

      Until it gives you a list of books and two thirds don’t exist and the rest aren’t even in the library.

      • jyl@sopuli.xyz · 1 day ago

        The worst I’ve got so far hasn’t been hallucinated “books”, but stuff like functions from a previous major version of the API mixed in.

        Most of the time I’m on the opposite side of the AI arguments, but I don’t think it’s unreasonable to use an LLM as a documentation search engine. The article itself also points out Copilot’s usefulness for similar things, but it seems that opinion lost the popular vote here.
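
        A quick, hedged illustration of that version-mismatch trap (Python again, with "numpy" as an arbitrary placeholder package): check which major version is actually installed before trusting an answer that might describe an older API.

        ```python
        # Compare the installed major version against whatever API era the LLM
        # seems to be describing. "numpy" is only a placeholder package name.
        from importlib.metadata import version, PackageNotFoundError

        def installed_major(package: str) -> int | None:
            """Installed major version of a package, or None if it isn't installed."""
            try:
                return int(version(package).split(".")[0])
            except (PackageNotFoundError, ValueError):
                return None

        major = installed_major("numpy")
        if major is None:
            print("Package not installed; nothing to check the answer against.")
        else:
            print(f"Installed major version: {major}; read the docs for this version, not an older one.")
        ```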

    • Kyrgizion@lemmy.world · 2 days ago

      I’ve had great success using ChatGPT to diagnose and solve hardware issues. There are plenty of legitimate use cases. The problem remains that if you ask it for information about something, the only way to be sure it’s correct is to already know what you’re asking about. Anyone without at least a passing knowledge of the subject will assume the info they get is correct, which will be the case most of the time, but not always. And in fields like security or medicine, such a small error could easily have dire ramifications.

      • jyl@sopuli.xyz · 1 day ago

        If you don’t know what the code does, you’re vibe coding. The point is not to waste time searching. Obviously you’re supposed to check the docs yourself, but verifying something once you’ve been pointed at it is much less tedious and time-consuming than finding it on your own when the docs are hard to navigate.
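
        For what it’s worth, “checking the docs yourself” doesn’t even require leaving the interpreter; a tiny sketch using only the Python standard library (“json.dumps” is just an arbitrary example target, not anything from this thread):

        ```python
        # Look up the actual documentation of whatever the LLM pointed you at,
        # straight from the installed code, to compare against its claims.
        help("json.dumps")  # resolves the dotted name and prints its real docs
        ```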