• ExLisper@lemmy.curiana.net · ↑6 · 2 hours ago

It’s because all the questions have already been answered. How many times can you answer how to reverse an array in JavaScript?

  • Xanza@lemm.ee · ↑16 ↓1 · 6 hours ago

    To the surprise of absolutely no one. Tends to happen when you cultivate one of the most toxic online spaces on the net. I’ve never asked a question on SO, but the verbiage used to accost people who are just trying to learn is insane. Mods don’t really care about post content as long as it’s not perceived as “hostile,” so you can generally be as passive-aggressive and shitty as you want. It’s just… weird.

    You can find especially viperish content when you find a question that has been answered, but someone chimes in with “Well, this isn’t the way that I do it!” and then goes on a tirade about how the question was asked poorly and the answer doesn’t completely answer it.

    Shit is just wild.

    • ExLisper@lemmy.curiana.net · ↑5 ↓1 · 2 hours ago

      I use SO daily and have never seen anything like what you describe there. All I see is incorrect answers getting downvoted. I don’t know, maybe I just don’t pay attention to the “verbiage”. I look at the code sample and move on. In the end, it’s not a forum; I’m not there to read opinions.

  • Kokesh@lemmy.world · ↑56 ↓7 · 10 hours ago

    I gave up on it when they decided to sell my answers/questions for AI training. First I wanted to delete my account, but my data would stay. So I started editing my answers to say “fuck AI” (in a nutshell). I got suspended for a couple months to think about what I did. So I dug deep into my conscience and came up with a better plan: I went through my answers (and questions) and poisoned them little by little, every day, with errors. After that I haven’t visited that crap network anymore. Before all this I was there all the time and had lots of karma (or whatever it was called there). Couldn’t care less after the AI crap. I honestly hope I helped make the AI, which was and probably still is trained on data the users didn’t consent to being sold, a little bit shittier.

    • Rayquetzalcoatl@lemmy.world · ↑33 ↓1 · edited · 9 hours ago

      Yeah, the AI-without-consent thing killed it for me, too. Shame we couldn’t totally tank the whole site with poisoned answers.

      While I found the site really helpful, teams that helped AI the way Stack Overflow’s did deserve to be on the losing end.

      I am absolutely not above cutting off my nose to spite my face.

    • talkingpumpkin@lemmy.world · ↑10 ↓1 · 9 hours ago

      I went through my answers (and questions) and poisoned them little by little every day with errors

      You are an evil genius (also, a very determined one - I wouldn’t have had the patience).

  • PoisonedPrisonPanda@discuss.tchncs.de · ↑33 ↓1 · 10 hours ago

    To be honest (although I am guilty of using ChatGPT way too often), I have never failed to find a question plus an answer to a similar problem on Stack Overflow.

    The realm is saturated. 90% of the common questions have been answered, and complex problems that haven’t been asked and answered yet are probably too difficult to formulate on Stack Overflow anyway.

    It should be kept at what it is. An enormous repository of knowledge.

    • Gamma@beehaw.org · ↑25 · edited · 10 hours ago

      This is a huge reason for the question decline! All the easy stuff has been answered; the knowledge is already there. But people are so used to infinite growth that anything contrary = death lol

      People also blame AI, but if people are going to AI to ask the common, already-answered questions, then… good! They’d just get hurt feelings when their question got closed as a dupe.

      • hallettj@leminal.space · ↑5 · 7 hours ago

        Yeah, the article seems to assume AI is the cause without attempting to rule out other factors. Plus the graph shows a steady decline starting years before ChatGPT appeared.

      • PoisonedPrisonPanda@discuss.tchncs.de · ↑5 · 9 hours ago

        People also blame ai, but if people are going to ai to ask the common already answered questions then… good!

        exactly!

        While I am indeed worried about the “wasted” energy (that’s a whole other topic), that’s pretty much what AI is good for.

          • Gamma@beehaw.org · ↑4 · 9 hours ago

            Stack Overflow has a whole network of Q&A sites. There are places to post and answer puzzles, play code golf, ask physics or politics questions, etc. Lots of useful stuff not many people know about.

            • ruk_n_rul@monyet.cc · ↑2 · edited · 6 hours ago

              Fun fact: the math “sub-Stack Overflow” is owned by the American Mathematical Society IIRC (do correct me if wrong), and they reserve the right to up and leave and set up roots elsewhere, outside the network.

  • ifGoingToCrashDont@lemmy.world · ↑15 ↓4 · 9 hours ago

    I think it’ll make a comeback eventually. LLMs will get progressively less useful as a replacement as their training data goes stale. Without refreshed data they’ll become just as irrelevant as the years go on. Where will they get data about new programming languages, or solutions to problems in new software? LLM knowledge will be stuck in 2025 unless new training material is fed in.

    • suoko@feddit.it · ↑1 · 4 hours ago

      Until someone releases an open LLM, in the sense that every prompt/question is published on a forum-like site.

      • TheTechnician27@lemmy.world · ↑13 ↓1 · edited · 8 hours ago

        Your analogy simply does not hold here. If you’re having an AI train itself to play chess, then you have adversarial reinforcement learning. The AI plays itself (or another model), and reward metrics tell it how well it’s doing. Chess has the following:

        1. A very limited set of clearly defined, rigid rules.
        2. One single end objective: put the other king in checkmate before yours is or, if you can’t, go for a draw.
        3. Reasonable metrics for how you’re doing and an ability to reasonably predict how you’ll be doing later.

        Here’s where generative AI is different: when you’re doing adversarial training with a generative deep learning model, you want one model to be a generator and the other to be a classifier. The classifier should be given some amount of human-made material and some amount of generator-made material and try to distinguish it. The classifier’s goal is to be correct, and the generator’s goal is for the classifier to pick completely randomly (i.e. it just picks on a coin flip). As you train, you gradually get both to be very, very good at their jobs. But you have to have human-made material to train the classifier, and if the classifier doesn’t improve, then the generator never does either.

        Imagine teaching a 2nd grader the difference between a horse and a zebra, having never shown them either before, by holding up pictures and asking whether each contains a horse or a zebra. Except the entire time you just keep holding up pictures of zebras and expecting the child to learn what a horse looks like. That’s what you’re describing for the classifier.
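        To make that dynamic concrete, here’s a toy numeric sketch of the generator/classifier loop. Everything in it is made up for illustration (a real GAN trains neural networks with gradients, not hand-tuned nudges): real data sits near 10, the generator starts near 0, and the discriminator learns a single threshold between them.

```python
import random

random.seed(0)

# Toy adversarial loop (illustrative only, not a real GAN):
# the discriminator learns a boundary between real and fake samples,
# and the generator chases whatever currently fools that boundary.

def make_real_sample():
    return 10 + random.uniform(-1, 1)  # the human-made material

class Discriminator:
    def __init__(self):
        self.threshold = 0.0
    def predict_real(self, x):
        return x > self.threshold
    def train_step(self, real, fake):
        # move the boundary toward the midpoint of a real and a fake sample
        self.threshold += 0.1 * ((real + fake) / 2 - self.threshold)

class Generator:
    def __init__(self):
        self.value = 0.0
    def sample(self):
        return self.value + random.uniform(-0.5, 0.5)
    def train_step(self, disc):
        # nudge output upward whenever the discriminator rejects it
        if not disc.predict_real(self.sample()):
            self.value += 0.2

disc, gen = Discriminator(), Generator()
for _ in range(500):
    disc.train_step(make_real_sample(), gen.sample())  # needs real data every step
    gen.train_step(disc)

# After training, gen.value has drifted up toward the real data around 10.
```

        If you drop the make_real_sample() call and feed the discriminator only fakes, the threshold just chases the generator and the loop collapses, which is exactly the point about needing human-made material.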

        • PoisonedPrisonPanda@discuss.tchncs.de · ↑4 ↓5 · 8 hours ago

          Well, indeed the devil’s in the details.

          But going with your story: yes, you are right in general. But the human input is already there.

          But you have to have human-made material to train the classifier, and if the classifier doesn’t improve, then the generator never does either.

          AI can already understand what stripes are, and can draw the connection that a zebra is a horse with stripes. Therefore the human input is already given. Brute-force learning will do the rest, simply because time is irrelevant and computations happen at a much faster rate.

          Therefore I believe that in the future AI will enhance itself, because the input it already got is sufficient to hone its skills.

          While I know that for now we are just talking about LLMs as black boxes that are repetitive in generating output (no creativity), the 2nd grader also has many skills that are sufficient to enlarge their knowledge without everything having to be taught by a human, in this sense.

          I simply doubt this:

          LLMs will get progressively less useful

          Where will it get data about new programming languages or solutions to problems in new software?

          On the other hand, you are right that AI will not understand abstractions of things beyond its realm. But this does not mean it won’t get better at the stuff it can draw conclusions from.

          And even in the case of new programming languages, I think a trained model will pick up the logic of the code, basically making use of its already-learned pattern-recognition skills, and probably at a faster pace than a human can pick up a new programming language.

          • TheTechnician27@lemmy.world · ↑11 ↓1 · edited · 7 hours ago

            Dude, I’m sorry, I just don’t know how else to tell you “you don’t know what you’re talking about”. I’d refer you to Chapter 20 of Goodfellow et al.'s 2016 book on Deep Learning, but 1) it tragically came out a year before transformer models, and 2) most of it will go over your head without a foundation from many previous chapters. What you’re describing – generative AI training on generative AI ad infinitum – is a death spiral. Literally the entire premise of adversarial training of generative AI is that for the classifier to get better, you need to keep funneling in real material alongside the fake material.

            You keep anthropomorphizing with “AI can already understand X”, but that betrays a fundamental misunderstanding of what a deep learning model is: it doesn’t “understand” shit about fuck; it’s an unfathomably complex nonlinear algebraic function that transforms inputs to outputs. To summarize in a word why you’re so wrong: overfitting. This is one of the first things you’ll learn about in a ML class, and it’s what happens when you let a model train on the same data over and over again forever. It’s especially bad for a classifier to be overfitted when it’s pitted against a generator, because a sufficiently complex generator will learn how to outsmart the overfitted classifier and it will find a cozy little local minimum that in reality works like dogshit but outsmarts the classifier which is its only job.

            You really, really, really just fundamentally do not understand how a machine learning model works, and that’s okay – it’s a complex tool being presented to people who have no business knowing what a Hessian matrix or a DCT is – but please understand when you’re talking about it that these are extremely advanced and complex statistical models that work on mathematics, not vibes.
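            A dead-simple way to see overfitting, as a hypothetical toy rather than a real deep net: a “model” that memorizes its training set scores perfectly on training data and roughly coin-flip on everything it has never seen.

```python
import random

random.seed(1)

# Toy overfitting demo (illustrative only): the true rule is y = 1 iff x >= 50.
# Our "model" is a lookup table that memorizes its 20 training examples,
# which is overfitting taken to its logical extreme.

train_xs = [random.randint(0, 99) for _ in range(20)]
train = [(x, int(x >= 50)) for x in train_xs]

memorizer = {x: y for x, y in train}  # perfect recall of the training set

def predict(x):
    # anything never seen in training gets a coin flip
    return memorizer.get(x, random.randint(0, 1))

train_acc = sum(predict(x) == y for x, y in train) / len(train)
test = [(x, int(x >= 50)) for x in range(100)]
test_acc = sum(predict(x) == y for x, y in test) / len(test)

# train_acc is a perfect 1.0; test_acc sits near chance on unseen inputs,
# even though the true rule is trivially simple.
```

            An overfitted classifier in a GAN fails the same way: flawless on the material it has seen, and helpless against the generator’s next trick.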

    • henfredemars@infosec.pub · ↑9 · 9 hours ago

      I don’t mind so much. It started as a wiki and then became a corporate AI training ground.

      I think Microsoft bought it, and as with most things they buy, they’ll run it into the ground.