OpenAI’s ChatGPT and Sam Altman are in massive trouble: OpenAI is being sued in the US for allegedly using content from the internet illegally to train its LLMs, or large language models.

  • Geek_King@lemmy.world · 2 years ago · +11/−1

    I very much enjoy ChatGPT and I’m excited to see where that technology goes, but lawsuits like this feel so shaky to me. OpenAI used publicly available data to train their AI model. If I wanted to get better at writing, and I went out and read a ton of posted text and articles to learn, would I need to go ask permission from each person who posted that information? What if I used what I learn to make a style similar to how a famous journalist writes, then got a job and made money from the knowledge I gained?

    The thing that makes these types of lawsuits have a hard time succeeding is proving that they “stole” data and used it directly. But my understanding of learning models in language and art is that they learn from the material more so than use it directly. I got access to Midjourney last August, and my first thought was: better enjoy this before it gets sued into uselessness. The problem is, people can sue these companies, but this genie can’t be put back into the bottle. Even if OpenAI gets hobbled in what it can do, other companies in other countries will do the same, and these lawsuits will stop nothing.

    We’re going to see this technology mature and get baked into literally every aspect of life.

    • crackgammon@lemmy.world · 2 years ago · +7

      Absolutely agree with you. It’s in theory no different to a child learning from what they’re exposed to in the world around them. But I guess the true desire from some would be to get royalty payments every time a brain made use of their “intellectual property” so I don’t think this argument would necessarily convince.

  • dtc@lemmy.world · 2 years ago · +6

    This is interesting. Now wealthy folks can defend their copying of data for personal gain, while content piracy remains a criminal offense for the everyday Joe, complete with steep fines and sometimes a vacation at Club Fed.

  • fubo@lemmy.world · 2 years ago · +6

    If I learned to read from Dr. Seuss books, does that mean that everything I write owes a copyright tariff to the Geisel estate?

  • tallwookie@lemmy.world · 2 years ago · +17/−12

    if you release data into the public domain (aka, if it’s indexable by a search engine), then copying that data isn’t stealing - it can’t be, because the data was already public in the first place.

    this is just some lawyer trying to make a name for themselves

    • jambalaya@lemmy.world · 2 years ago · +22

      Just because the data is “public” doesn’t mean it was intended to be used in this manner. Some of the data was even explicitly protected by GPL licensing or similar.

      • tallwookie@lemmy.world · 2 years ago · +3/−9

        but GPL licensing indicates that “if code was put in the public domain by its developer, it is in the public domain no matter where it has been” - so, likewise for data. if anyone has a case against OpenAI, it’d be the platforms they scraped - and ultimately those platforms would have to file their own individual lawsuits.

            • Wander@kbin.social · 2 years ago · +0

              If you release code under the GPL and I modify it and distribute the result, I’m required to release those modifications publicly under the GPL as well.

              • inspxtr@lemmy.world · 2 years ago · +0

                So if content is under the GPL and used as training data, how far into the process of training/fine-tuning does it count as “modification”? For example, if I scrape a bunch of blog posts and just use tools to analyze the language, is that considered “modification”? And what is the minimum OpenAI should do (or should have done) here - does it stop at making the code for processing the data public, or the entire code base?
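As a purely hypothetical illustration of the “just use tools to analyze the language” case in the question above, a statistical pass over scraped text might look like the following sketch (the sample posts and function name are made up; this is not anything OpenAI actually runs):

```python
from collections import Counter
import re

def word_frequencies(posts):
    """Aggregate word counts across a list of scraped post bodies.

    Only statistics about the language are kept; no post text is
    stored or reproduced in the output.
    """
    counts = Counter()
    for body in posts:
        counts.update(re.findall(r"[a-z']+", body.lower()))
    return counts

posts = [
    "The quick brown fox jumps over the lazy dog.",
    "The dog sleeps while the fox runs.",
]
freqs = word_frequencies(posts)
print(freqs.most_common(3))  # 'the' is by far the most common token
```

Whether even this counts as a “modification” of GPL-covered text is exactly the open legal question the thread is circling.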

                • Wander@kbin.social · 2 years ago · +1

                  I’m not sure, and I’m not sure there’s legal precedent for that either.
                  That’s why I don’t have a problem with any of these lawsuits: whichever way they go, they give us clarity on the legal aspects.

    • Toothpickjim@lemmy.fmhy.ml · 2 years ago · +22

      Not everything indexed by a search engine is in the public domain; that’s not how copyright works.

      There’s plenty that actually is in the public domain, but I guess scraping the whole web is a lot easier for these people.

    • ChrisLicht@lemm.ee · 2 years ago · +10/−1

      Let’s note that a NY Magazine article is copyrighted but publicly available.

      If an LLM scrapes that article, then regurgitates pieces of it verbatim in response to prompts, without quoting or parodying, that is clearly a violation of NY Mag’s copyright.

      If an LLM merely consumes the content and uses it to infinitesimally improve its ability to guess the next word that fits into a reply to a prompt, without a series of next-words reproducing multiple sentences from the NY Mag article, then that should be perfectly fine.
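The distinction drawn above, verbatim regurgitation versus mere statistical influence, can be roughed out in code. Here is a minimal sketch that flags long verbatim word runs shared between a source article and a model’s output (the 8-word shingle size and the sample sentences are arbitrary assumptions, not any court’s test):

```python
def ngrams(text, n=8):
    """Return the set of n-word shingles in a text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(source, output, n=8):
    """Fraction of the output's n-grams that also appear in the source.

    A high value suggests long verbatim runs were reproduced; a value
    near zero is consistent with mere statistical influence.
    """
    out = ngrams(output, n)
    if not out:
        return 0.0
    return len(out & ngrams(source, n)) / len(out)

article = "the mayor announced a sweeping new transit plan on tuesday amid fierce debate"
copied = "the mayor announced a sweeping new transit plan on tuesday amid fierce debate"
fresh = "city officials unveiled transportation changes this week after much discussion"
print(verbatim_overlap(article, copied))  # 1.0 for a verbatim copy
print(verbatim_overlap(article, fresh))   # 0.0 for independent phrasing
```

Real duplication detectors are fuzzier (hashing, edit distance), but the intuition is the same: mere influence leaves no long shared word runs, while copying does.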

    • phoneymouse@lemmy.world · 2 years ago · +8

      I don’t agree. Purpose and use case should be a factor. For example, my friends take pictures of me and put them on social media to share memories. Those images have since been scraped by companies like Clearview AI providing reverse face search to governments and law enforcement. I did not consent to or agree to that use when my likeness was captured in a casual setting like a birthday party.

  • Carlos Solís@communities.azkware.net · 2 years ago · +2

    About the only two major places on the Internet that explicitly give permission to reuse their users’ content are Wikipedia and Stack Overflow. Add the public-domain texts in Project Gutenberg and that’s about it; the rest of social media requires the user’s consent before content can be reused elsewhere. This is the crux of the problem: CEOs taking data indiscriminately from the Internet just because it’s indexable by Google.
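For what it’s worth, the “indexable by Google” norm criticized above does have a machine-readable opt-out: robots.txt. A minimal sketch of how a well-behaved scraper could honor it, using Python’s standard library (the rules and URLs are hypothetical, and robots.txt expresses crawling preferences, not copyright permission):

```python
from urllib.robotparser import RobotFileParser

def may_fetch(robots_txt_lines, user_agent, page_url):
    """Decide whether a site's robots.txt allows fetching a page.

    Honoring robots.txt is the bare minimum for a well-behaved
    scraper; it does not by itself grant any license to the content.
    """
    parser = RobotFileParser()
    parser.parse(robots_txt_lines)
    return parser.can_fetch(user_agent, page_url)

# A hypothetical robots.txt that disallows one directory for all crawlers.
rules = [
    "User-agent: *",
    "Disallow: /private/",
]
print(may_fetch(rules, "MyBot", "https://example.com/posts/1"))    # True
print(may_fetch(rules, "MyBot", "https://example.com/private/x"))  # False
```

Much of the complaint in this thread is that mass scraping for training treated indexability as if it were this kind of explicit permission.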

  • cerevant@lemmy.world · 2 years ago · +9/−7

    So anyone who creates something remotely similar to something online is plagiarizing, got it.

    Folks, that’s how we all do things - we read stuff, we observe conversations, we look at art, we listen to music, and what we create is a synthesis of our experiences.

    Yes, it is possible for AI to plagiarize, but that needs to be evaluated on a case-by-case basis, just as it is for humans.

  • ssillyssadass@lemmy.world · 2 years ago · +5/−3

    I don’t think something is stolen if it’s analyzed and used for something new. What matters is never who came up with an idea, only what you do with it.
