• archomrade [he/him]@midwest.social
    1 year ago

    Copyright is already just a band-aid for what is really an issue of resource allocation.

    If writers and artists weren’t at risk of losing their means of living, we wouldn’t need to concern ourselves with the threat of an advanced tool supplanting them. Never mind how the tool is created; it is clearly very valuable (otherwise it would not represent such a large threat to writers) and should be made as broadly available (and jointly owned and controlled) as possible. By expanding copyright like this, all we’re doing is gatekeeping the creation of AI models to the largest tech companies and making them prohibitively expensive to train for smaller applications.

    If LLMs are truly the start of a “fourth industrial revolution,” as some have claimed, then we need to consider the possibility that our economic arrangement is ill-suited for the kind of productivity AI is said to bring. Private ownership (over creative works, over AI models, and over data) is getting in the way of what could be a beautiful technological advancement that benefits everyone.

    Instead, we’re left squabbling over who gets to own what and how.

    • Franzia@lemmy.blahaj.zone
      1 year ago

      “fourth industrial revolution” as some have claimed

      The people claiming this are often the shareholders themselves.

      prohibitively expensive to train for smaller applications.

      There is so much work out there for free, with no copyright. The biggest cost in training is most likely the hardware, and I see no added value in having AI train on Stephen King ☠️

      Copyright is already just a band-aid for what is really an issue of resource allocation.

      God damn right, but I want our government to put a band-aid on capitalists just stealing whatever the fuck they want, “move fast and break things” style. It’s yet another test of my confidence in the state. Every issue is a litmus test for how our society deals with the problems that arise.

      • archomrade [he/him]@midwest.social
        1 year ago

        There is so much work out there for free, with no copyright

        There’s actually a lot less than you’d think (since copyright lasts so long), and even less now that online and digitized sources are being locked down and charged for by the domain owners. But even if it were abundant, it would likely not satisfy the real concern here. If there were enough data to produce an LLM of similar quality without using copyrighted data, it would still threaten the livelihood of those writers. What’s to stop a user from providing a sample of Stephen King’s writing to the LLM and having it produce derivative work, even though it was never trained on copyrighted data? If the user had paid for that work, are they allowed to use the LLM in the same way? If they aren’t, who is really at fault: the user or the owner of the LLM?

        The law can’t address the complaints of these writers because interpreting the law to that standard is simply too restrictive and sets an impossible bar. The better way to address the complaint is to reform copyright law (or to regulate LLMs through some other mechanism). Frankly, I don’t buy that LLMs are a competing product to the copyrighted works.

        The biggest cost in training is most likely the hardware

        That’s true for large models like the ones owned by OpenAI and Google, but given the amount of data needed to effectively train and fine-tune these models, if that data suddenly became scarce and expensive it could easily overtake the hardware cost. To say nothing of small models that run on consumer hardware.

        capitalists just stealing whatever the fuck they want “move fast and break things”

        I understand this sentiment, but keep in mind that copyright ownership is just another form of capital.

        • Franzia@lemmy.blahaj.zone
          1 year ago

          Thanks for this reply. You’ve shown this issue has depth that I’ve ignored because I like very few of the advocates for the AI we’ve got.

          So one thing that trips me up is that I thought copyright was about use. As a consumer rather than a creator this makes complete sense: you can read it if you own it or borrowed it, and you do not distribute it in any way. But there are also gentlemen’s agreements built into how we use books and digital prints.

          Unintuitively, copying is also very important. Artists copy to learn, for example. Musicians have the right to cover anyone’s music. Engineers deconstruct and reverse-engineer one another’s solutions. And businesses cheat off of one another all the time; even when it has been proven to be wrong, the incentive is high.

          So is taking the text of the book, no matter how you got it, and using it as part of a new technology okay?

          Clearly the distribution isn’t the wrong part. You’re not distributing the book; you’ve made a derivative.

          The ownership isn’t there either, I mean, the works were pirated. We’ve been taught that simply having something obtained through online copying is not only an offense against the “rightsholder” but “piracy” and “stealing.” I have a really simplistic view of this: I just want creators to be paid for their work and to have autonomy (rights) over what is done with it. That is rarely the case; we live in a world with publishers.

          So it’s that first action. Is that use of the text in another work legal?

          My basic understanding of fair use is that it applies when you add to a work. You critique or reuse that work; your work is about the other work, but it is also something new that stands on its own, like an essay or a collage, rather than a mere collection.

          I am so confused. Text-based AI is run by capitalists, and we only have it FOSS because Meta can afford to lose money in order to undercut OpenAI. Image-based AI is almost certainly wrong: it copied and plugged in all of this other work, and now tons of people are suing. Getty Images is leveraging its rights management to make an AI that follows the rules we are living with. My gut reaction is that a lot of people deserve royalties.

          But on the other hand, it sounds like AI did not work until they gave it the entire internet’s worth of data to train on. Was training on smaller, legal sets a failure? Or maybe it was because they took the tech approach of training the AI on every Google image of dogs, cats, etc., without any real variation. Because they’re engineers, not artists. And not even good engineers, if their best work is just scraping other people’s work and feeding it to this weird computer program.

          This is all just stealing, right? But stealing is a lot more legal than I thought, especially when it comes to digitally published works of art, or physically published art that’s popular enough to be shared online.