• Anders429@programming.dev · +6 · 2 hours ago

    A lot of people seem to be celebrating this, but I personally think this is a net negative for programming. Are people actually replacing SO with talking to LLMs? If not, where are they going?

    I’ve seen an uptick in people using places like Discord to get help. But that’s not easily searchable and not in the same format as StackOverflow. SO was meant to organize these answers to make asking questions easier. Now it seems like we’re walking away from that, and I can’t quite understand why. Is it really because SO is “toxic”?

  • chrischryse@lemmy.world · +3/-3 · 4 hours ago

    Good. StackOverflow is toxic. While I was in school I would ask questions that were “obvious”, I guess. I’d get told I was dumb (not in those words, but it was implied) when trying to ask for clarification. Then I got banned from posting any more questions due to downvotes. Like, imo, how can you learn if people shun you for asking questions?

    Reddit’s programming community was more welcoming and kinder than the stuck-up folk on SO.

  • ExLisper@lemmy.curiana.net · +22/-2 · 13 hours ago

    It’s because all the questions have already been answered. How many times can you answer how to reverse an array in JavaScript?

  • Xanza@lemm.ee · +23/-3 · 18 hours ago

    To the surprise of absolutely no one. Tends to happen when you cultivate one of the most toxic online spaces on the net. I’ve never asked a question on SO, but the verbiage used to accost people just trying to learn is insane. Mods don’t really care about post content as long as it’s not perceived as “hostile,” so you can generally be as passive-aggressive and shitty as you want. It’s just… weird.

    You find especially viperish content when a question has already been answered, but someone chimes in with “Well, this isn’t the way that I do it!” and then goes on a tirade about how the question was asked poorly and the answer doesn’t completely answer it.

    Shit is just wild.

    • MagicShel@lemmy.zip · +7 · 7 hours ago

      I asked a question on there about Apache POI. No one answered it, so I found a solution and answered it myself. It must’ve stayed relevant, because I fielded questions about it for years.

      Then they took my account away, I think because I didn’t confirm my identity after a big breach? Then I looked for my Q&A and it was attributed to someone else. I was hot about it for a minute, then realized I didn’t care and was finally free from being the expert in that one niche thing I’ve never touched since.

    • ExLisper@lemmy.curiana.net · +17/-1 · 13 hours ago

      I use SO daily and have never seen anything like what you describe there. All I see is incorrect answers being downvoted. I don’t know, maybe I just don’t pay attention to the “verbiage”. I look at the code sample and move on. In the end, it’s not a forum. I’m not there to read opinions.

      • Bogus007@lemm.ee · +1/-4 · 9 hours ago

        I find the concept of downvoting very toxic and discouraging. It can prevent people from expressing different views, which is something discussion and our personal development thrive on. You can see it on Reddit and even on Lemmy, where people with different views sometimes get heavily downvoted. I consider it close to “cancel culture”: a majority decides it doesn’t like your opinion, so it tries to silence you by voting you “out”. I would really love to see Lemmy remove this feature and allow only upvoting, so you can upvote a comment or not, but you cannot downvote it.

        • ExLisper@lemmy.curiana.net · +3 · 8 hours ago
          1. You can disable downvotes on a Lemmy instance. My instance doesn’t have them.

          2. What opinions do you see expressed on SO? Maybe we’re searching for different things there, but all I see are answers that are either correct or not. If someone misunderstood the question and the answer is not correct, it gets downvoted. But I don’t know, maybe others use SO for things like “what’s your favorite distro?” or “Is AWS better than Azure?”.

          • Bogus007@lemm.ee · +1 · 2 hours ago
            1. In my opinion, you’re doing a great job by not enabling downvotes. Every user can see how many votes their comment has, which should be enough for them to gauge how well their comment is received. 👍

            2. I haven’t been on Stack Overflow in a long time (around 15 years). Back then, I was mostly focused on statistics and programming in R. It’s true that there were rude responses, mostly of the sort implying the OP should have known the answer beforehand or could have researched it themselves before asking. But I never saw personal attacks.

        • sqgl@beehaw.org · +2/-2 · 8 hours ago

          Am still on Reddit and was reminded just minutes ago of what you just said.

          Here are 20 people downvoting me for giving a TL;DR which is somehow both good and bad simultaneously.

          • Bogus007@lemm.ee · +1 · 7 hours ago

            Maybe I’m misunderstanding what you wrote on Reddit, but from what I read, there was nothing even remotely offensive. You simply provided information. Downvoting you for that is just silly.

            The downvotes you’re getting here on Lemmy for your comment are equally baseless (current status: 0). It just shows that some people have enough energy to downvote, but not enough to engage in a discussion. Maybe they should save that energy for something more constructive.

            Some newspaper forums require identity verification (through paid subscriptions, social media accounts, etc.). These forums are generally much more civil - and we all know why.

            • sqgl@beehaw.org · +1 · 6 hours ago · edited

              Which Lemmy downvotes?

              I got banned temporarily (but in practice permanently) from The Guardian for posting newspaper articles reporting on evidence that Assange’s Swedish alleged rape victims had boasted about their sexual encounters on Twitter and in SMS messages (which his lawyer submitted as evidence). No commentary from me.

              I got banned from r/worldnews without warning because I posted an article by Seymour Hersh suggesting that the Nord Stream pipeline was sabotaged by the USA itself. My only commentary was to add that he is a respected journalist who exposed the My Lai massacre in Vietnam.

              I had comments deleted on Lemmy WorldNews because the mod didn’t like my politics. At least I wasn’t banned.

              Friends have set up a private forum and server, which is a solution, but it doesn’t have much activity. It seems those in our circle are too attracted to the popular socials.

  • Kokesh@lemmy.world · +66/-9 · 21 hours ago

    I gave up on it when they decided to sell my answers/questions for AI training. First I wanted to delete my account, but my data would stay. So I started editing my answers to say “fuck AI” (in a nutshell). I got suspended for a couple of months to think about what I did. So I dug deep into my conscience and came up with a better plan: I went through my answers (and questions) and poisoned them little by little every day with errors. After that I haven’t visited that crap network anymore. Before all this I was there all the time and had lots of karma (or whatever it was called there). Couldn’t care less after the AI crap. I honestly hope I helped make the AI, which was and probably still is trained on data the users didn’t consent to being sold, a little bit shittier.

    • Anders429@programming.dev · +1 · 2 hours ago

      I guess the main issue here is that we let some group “own” all of the questions and answers, giving them the opportunity to sell them whenever they wanted to cash out.

      Maybe a better solution is some kind of decentralized version of StackOverflow that prevents one person from owning everything. Something like Lemmy and Mastodon, but for questions and answers specifically.

      • Kokesh@lemmy.world · +2 · 5 hours ago

        Yes, but if all this coding AI fails more and more at delivering good results, people may use it less.

    • Rayquetzalcoatl@lemmy.world · +35/-2 · 21 hours ago · edited

      Yeah, the AI-without-consent thing killed it for me, too. Shame we couldn’t totally tank the whole site with poisoned answers.

      As helpful as I find the site, people who help AI the way the StackOverflow team did deserve to be on the losing end.

      I am absolutely not above cutting off my nose to spite my face.

    • talkingpumpkin@lemmy.world · +11/-1 · 21 hours ago

      I went through my answers (and questions) and poisoned them little by little every day with errors

      You are an evil genius (also, a very determined one - I wouldn’t have had the patience).

  • PoisonedPrisonPanda@discuss.tchncs.de · +44/-1 · 21 hours ago

    To be honest (although I am guilty of using ChatGPT way too often), I have never failed to find a question and an answer to a similar problem on StackOverflow.

    The realm is saturated. 90% of the common questions are answered. Complex problems that have not yet been asked and answered are probably too difficult to formulate on StackOverflow anyway.

    It should be kept as what it is: an enormous repository of knowledge.

    • Anders429@programming.dev · +1 · 2 hours ago

      This is a really good point. I joined StackOverflow after graduating university a few years ago and found it really hard to participate. You need karma to vote on things or add comments, but the only unanswered questions are often basically unanswerable. I had some success adding answers that were better than the existing ones, but it was limited, because by that point the site was already declining and there was no one left to upvote my contributions.

    • Gamma@beehaw.org · +33 · 21 hours ago · edited

      This is a huge reason for the question decline! All the easy stuff has been answered, the knowledge is already there. But people are so used to infinite growth that anything contrary = death lol

      People also blame ai, but if people are going to ai to ask the common already answered questions then… good! They’d just get hurt feelings when their question was closed as a dupe

      • hallettj@leminal.space · +10 · 18 hours ago

        Yeah, the article seems to assume AI is the cause without attempting to rule out other factors. Plus the graph shows a steady decline starting years before ChatGPT appeared.

      • PoisonedPrisonPanda@discuss.tchncs.de · +6 · 21 hours ago

        People also blame ai, but if people are going to ai to ask the common already answered questions then… good!

        exactly!

        While I am indeed worried about the “wasted” energy (that’s a whole other topic), that’s pretty much what AI is good for.

          • Gamma@beehaw.org · +5 · 20 hours ago

            Stack Overflow has a whole network of Q&A sites. There are places to post and answer puzzles, play code golf, ask physics or politics questions, etc. Lots of useful stuff not many people know about.

            • ruk_n_rul@monyet.cc · +4 · 17 hours ago · edited

              Fun fact: the math “sub-StackOverflow” is owned by the American Mathematical Society IIRC (do correct me if wrong), and they reserve the right to up and leave and set up roots elsewhere outside the network.

  • ifGoingToCrashDont@lemmy.world · +20/-4 · 21 hours ago

    I think it’ll make a comeback eventually. LLMs will get progressively less useful as a replacement as their training data goes stale. Without refreshed data they’re going to get more and more irrelevant as the years go on. Where will they get data about new programming languages, or solutions to problems in new software? LLM knowledge will be stuck in 2025 unless new training material is provided.

    • suoko@feddit.it · +4 · 15 hours ago

      Until someone releases an open LLM in the sense that every prompt/question is published on a forum-like site.

      • TheTechnician27@lemmy.world · +15/-1 · 20 hours ago · edited

        Your analogy simply does not hold here. If you’re having an AI train itself to play chess, then you have adversarial reinforcement learning. The AI plays itself (or another model), and reward metrics tell it how well it’s doing. Chess has the following:

        1. A very limited set of clearly defined, rigid rules.
        2. One single end objective: put the other king in checkmate before yours is or, if you can’t, go for a draw.
        3. Reasonable metrics for how you’re doing and an ability to reasonably predict how you’ll be doing later.

        Here’s where generative AI is different: when you’re doing adversarial training with a generative deep learning model, you want one model to be a generator and the other to be a classifier. The classifier should be given some amount of human-made material and some amount of generator-made material and try to distinguish the two. The classifier’s goal is to be correct, and the generator’s goal is for the classifier to pick completely randomly (i.e. it just picks on a coin flip). As you train, you gradually get both to be very, very good at their jobs. But you have to have human-made material to train the classifier, and if the classifier doesn’t improve, then the generator never does either.

        Imagine teaching a 2nd grader the difference between a horse and a zebra having never shown them either before, and you hold up pictures asking if they contain a horse or a zebra. Except the entire time you just keep holding up pictures of zebras and expecting the child to learn what a horse looks like. That’s what you’re describing for the classifier.
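        To make the “the classifier needs human-made material” point concrete, here is a minimal toy sketch in plain Python. Everything in it is invented for illustration (the tiny linear generator, the logistic discriminator, the learning rate, the distributions); it is not anyone’s actual setup. The thing to notice is that every discriminator update consumes a fresh human-made sample:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # Clamp to avoid math.exp overflow on extreme inputs.
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

# Real (human-made) data ~ Normal(4, 1).
# Generator: g(z) = a*z + b with noise z ~ Normal(0, 1).
# Discriminator (the "classifier"): D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.0, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    x_real = random.gauss(4.0, 1.0)  # the indispensable human-made sample

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w += lr * ((1.0 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1.0 - d_real) - d_fake)

    # Generator: gradient ascent on log D(fake); gradient flows through x_fake.
    d_fake = sigmoid(w * x_fake + c)
    grad_x = (1.0 - d_fake) * w  # d/dx of log D(x)
    a += lr * grad_x * z
    b += lr * grad_x

# With real data present, the generator's offset b drifts toward the real mean.
print(f"generator offset b = {b:.2f}")
```

If you starve the discriminator of `x_real` (say, by feeding it earlier generator output instead), its notion of “real” collapses onto the generator’s own distribution and neither model improves, which is exactly the failure mode of training a generator against its own output.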

        • PoisonedPrisonPanda@discuss.tchncs.de · +4/-5 · 19 hours ago

          Well, indeed, the devil’s in the details.

          But going with your story: yes, you are right in general. But the human input is already there.

          But you have to have human-made material to train the classifier, and if the classifier doesn’t improve, then the generator never does either.

          AI can already understand what stripes are, and can draw the connection that a zebra is a horse with stripes. Therefore the human input is already given. Brute-force learning will do the rest, simply because time is irrelevant and computations occur at a much faster rate.

          Therefore I believe that in the future AI will enhance itself, because the input it already got is sufficient to hone its skills.

          I know that for now we are just talking about LLMs as black boxes that are repetitive in generating output (no creativity). But the 2nd grader also has many skills that are sufficient to enlarge their knowledge without everything being taught by a human, in this sense.

          I simply doubt this:

          LLMs will get progressively less useful

          Where will it get data about new programming languages or solutions to problems in new software?

          On the other hand, you are right that AI will not understand abstractions beyond its realm. But this does not mean it won’t get better at the things it can draw conclusions from.

          And even in the case of new programming languages, I think a trained model will pick up the logic of the code, basically making use of its already-learned pattern recognition skills, and probably at a faster pace than a human can learn a new programming language.

          • TheTechnician27@lemmy.world · +16/-1 · 19 hours ago · edited

            Dude, I’m sorry, I just don’t know how else to tell you “you don’t know what you’re talking about”. I’d refer you to Chapter 20 of Goodfellow et al.'s 2016 book on Deep Learning, but 1) it tragically came out a year before transformer models, and 2) most of it will go over your head without a foundation from many previous chapters. What you’re describing – generative AI training on generative AI ad infinitum – is a death spiral. Literally the entire premise of adversarial training of generative AI is that for the classifier to get better, you need to keep funneling in real material alongside the fake material.

            You keep anthropomorphizing with “AI can already understand X”, but that betrays a fundamental misunderstanding of what a deep learning model is: it doesn’t “understand” shit about fuck; it’s an unfathomably complex nonlinear algebraic function that transforms inputs to outputs. To summarize in a word why you’re so wrong: overfitting. This is one of the first things you’ll learn about in a ML class, and it’s what happens when you let a model train on the same data over and over again forever. It’s especially bad for a classifier to be overfitted when it’s pitted against a generator, because a sufficiently complex generator will learn how to outsmart the overfitted classifier and it will find a cozy little local minimum that in reality works like dogshit but outsmarts the classifier which is its only job.

            You really, really, really just fundamentally do not understand how a machine learning model works, and that’s okay – it’s a complex tool being presented to people who have no business knowing what a Hessian matrix or a DCT is – but please understand when you’re talking about it that these are extremely advanced and complex statistical models that work on mathematics, not vibes.
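            The death spiral is easy to simulate. Below is a deliberately crude toy of my own construction (not from any paper; the sample size and generation count are arbitrary): each “generation” fits a Gaussian to the previous generation’s output, then samples the next generation purely from its own fit, with no fresh real data ever entering. Watch the estimated spread wither:

```python
import math
import random

random.seed(1)

def fit(samples):
    """Fit a Gaussian (MLE); this stands in for 'training on the data'."""
    n = len(samples)
    mu = sum(samples) / n
    var = sum((s - mu) ** 2 for s in samples) / n
    return mu, math.sqrt(var)

n = 20  # samples per generation
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: real data
mu, sigma = fit(data)
history = [sigma]

for generation in range(300):
    # Train only on the previous model's own output -- no human-made data.
    data = [random.gauss(mu, sigma) for _ in range(n)]
    mu, sigma = fit(data)
    history.append(sigma)

print(f"std dev: generation 0 = {history[0]:.3f}, "
      f"generation 300 = {history[-1]:.3f}")
```

The collapse here is purely statistical (each round’s finite-sample variance estimate shrinks a little on average), so it’s only an analogy, but it’s the same flavor of degradation: cut off from outside data, the model converges on an impoverished version of itself.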

    • henfredemars@infosec.pub · +10 · 20 hours ago

      I don’t mind so much. It started as a Wiki and then became a corporate AI training ground.

      I think Microsoft bought it, and as with most things they buy, they’ll run it into the ground.