I run a small VPS host and rely on PayPal for payments, mainly because (a) most VPS customers pay that way if you aren’t AWS or GoDaddy, and (b) its fraud protection is very good. My prior venture saw quite a few chargebacks through Stripe, so it went PayPal-only too.

My dad told me I should “reduce the processing fees” and cited ChatGPT’s inaccurate claim that PayPal charges 5% fees, when it really charges 3–3.5% (plus 49 cents). Yet he insisted 5% was the charge.
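As a rough sanity check on the fee math, here’s a quick sketch (assuming the commonly cited 3.49% + $0.49 standard checkout rate; actual PayPal rates vary by product, region, and account type):

```python
# Assumed standard-checkout rate: 3.49% + $0.49 per transaction.
# (Actual PayPal rates vary by product, region, and account type.)
def paypal_fee(amount, rate=0.0349, fixed=0.49):
    """Percentage cut plus fixed per-transaction charge, rounded to cents."""
    return round(amount * rate + fixed, 2)

for amount in (10.00, 50.00, 200.00):
    fee = paypal_fee(amount)
    print(f"${amount:7.2f} -> fee ${fee:.2f} ({fee / amount:.1%} effective)")
```

Because of the fixed 49 cents, the effective percentage depends on ticket size, so the headline rate alone doesn’t tell the whole story.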

Yes, PayPal sucks but ChatGPT sucks even more. When I was a child he said Toontown would ruin my brain, yet LLMs are ruining his even more.

  • Creat@discuss.tchncs.de · 5 days ago

    He didn’t cite wrong information (only) because of ChatGPT, but because he lacks the instinct (or training, or knowledge) to verify the first result he either sees or likes.

    If he had googled for the information and his first click was an article that was giving him the same false information, he would’ve probably insisted just the same.

    LLMs sure make this worse, as much more of the information coming out of them is wrong, but the root cause is the same as it was before their prevalence. Coincidentally, it’s the reason misinformation campaigns work so well and are so easy.

    Edit: removed distraction

      • Ulrich@feddit.org · 4 days ago

        This is because ChatGPT doesn’t have “information” in the first place

        What? LOL then why are so many companies suing them for copyright?

        • Robust Mirror@aussie.zone · 4 days ago

          Because it’s complicated. It is fed that data, but it can’t access it, refer to it, look it up, or anything like that. If you feed it all of reddit, you can’t just ask it what comments a given user made; it simply doesn’t know. It uses all the data it’s fed to build statistical patterns of language and concepts, and those patterns are what it outputs.

          This is why it can quote things like Shakespeare: that text is so widely repeated, and fed to it so many times, that it becomes a common pattern it can reliably reproduce. But it isn’t looking up that Shakespeare quote in some database; it doesn’t have that ability or that information.
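          A toy sketch of the “patterns, not documents” point — this is a bigram word model, nothing like a real LLM in scale, but it shares the key property that only aggregated statistics are stored, never the source documents:

```python
from collections import Counter, defaultdict

# Toy illustration only: count which word follows which in the training text.
corpus = "to be or not to be that is the question".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# A widely repeated pattern can be reproduced reliably...
print(counts["to"].most_common(1))  # "be" is the most common word after "to"

# ...but "which document contained this sentence?" is unanswerable:
# that information was never stored, only the pair counts.
```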

            • Robust Mirror@aussie.zone · 4 days ago

              If you have it search the internet, yes, but that’s completely different from its default behaviour. That’s specifically providing it a document to look at after it has been trained, which it can then refer to.
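              A minimal sketch of that distinction (the build_prompt helper is hypothetical, not a real API): retrieved or provided text goes into the prompt at request time, so the model can actually read and quote it, whereas training data only ever shaped its statistical patterns:

```python
# Hypothetical helper: the only difference between "default" and "search"
# modes is whether retrieved text is pasted into the prompt the model sees.
def build_prompt(question, retrieved_docs=None):
    """Assemble the text a model would actually see for one request."""
    parts = []
    if retrieved_docs:  # e.g. search results or an uploaded file
        parts.append("Context:\n" + "\n".join(retrieved_docs))
    parts.append("Question: " + question)
    return "\n\n".join(parts)

# Default behaviour: the model sees only the question and must rely on
# patterns baked into its weights during training.
print(build_prompt("What are PayPal's standard fees?"))

# With search: the fetched text sits in the context window, so the model
# can read and refer to it directly.
print(build_prompt("What are PayPal's standard fees?",
                   ["PayPal standard checkout: 3.49% + $0.49 per transaction"]))
```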

    • Ulrich@feddit.org · 4 days ago

      If he had googled for the information and his first click was an article that was giving him the same false information, he would’ve probably insisted just the same.

      If you’re looking up content written by humans and published as an article on the internet, it is far less likely to be wrong.

      • Creat@discuss.tchncs.de · 4 days ago

        It’s a bit less likely to be wrong, but there’s still plenty of room for error, whether malicious and intentional or through sheer incompetence at researching even basic things. One person being wrong once, by misreading, by misinterpreting data, or by trying to steer perception of something, can easily snowball into many sources repeating that wrong information (“I’ve read it, so it must be true”). Many kinds of information are also very dependent on perspective, adding nuance beyond “correct” and “false”.

        There are plenty of reasons to double-check information (seemingly) written by humans; you just check it for different reasons than AI content. But the basic idea that “it can easily be wrong” is the same.