• freewheel@lemmy.world
    1 year ago

    Sure, but that particular horse has left the barn. There will be cases where identification is easy (or at least easier), but as shown in Oracle v. Google, there are only so many ways to express ideas in code.

    For example, I just asked Claude 2 “Write a program in C to count from 1 to some arbitrary number specified on the command line.” Can you tell me the origin of this line from the result?

    for(int i=1; i<=n; i++) {

    I mean, if it’s from a copyrighted work, I certainly don’t want to use it in an open-source project!

    EDIT: Guessing there’s a bug in HTML entity handling.

    • thebestaquaman@lemmy.world
      1 year ago

      Of course, once the AI is trained, you can’t look at some arbitrary output and determine whether that specific output resulted from some specific training data set. In principle, if some of your training data is found to violate copyright, you either have to compensate the copyright holder or re-train the model without that data set.

      Finding out whether a copyrighted work is part of the training data is a matter of going through that data, and should be the responsibility of the people training the model. I would like to see a case where it is shown that a copyrighted dataset was used to train a model, and the people who violated the copyright by doing so are held responsible.

      • freewheel@lemmy.world
        1 year ago

        I agree that under the current system of “idea ownership” someone needs to be held responsible, but in my opinion it’s ultimately a futile action. The moment arbitrary individuals are allowed to download these models and use them independently (HuggingFace, et al.), all control over whatever is in the model is lost. Shutting down OpenAI or Anthropic doesn’t remove the models from people’s computers, and it doesn’t eliminate the knowledge of how to train them.

        I have a gut feeling this is going to change the face of copyright, and it’s going to be painful. We collectively weren’t ready.

    • Neato@kbin.social
      1 year ago

      It’s not over and done with. Pass regulation saying every AI accessible within the country has to have a publicly available dataset. That way people can see if their works have been stolen or not. When we inevitably see works recreated wholesale without proper copyright, the AI creators can be sued or fined.

      • freewheel@lemmy.world
        1 year ago

        Couple of things here - what do you do with the open source models already published? There are terabytes of data encapsulated in those. Some have published corpora, some haven’t. How do you plan to determine that a work comes from an unregistered AI?

        Also, with respect to “within the country” - VPNs exist. Tor exists. SD cards exist. What’s your plan to control the flow of trained models without violating civil rights?

        This is a Teflon slope covered in oil. (IMO)

        • Neato@kbin.social
          1 year ago

          If they don’t publish what their training data is, they should be considered in violation of copyright. World governments can block sites if they want. It’s hard to swat down all the random wikis and such, but the major AI competitors wouldn’t be a big problem.

          • FaceDeer@kbin.social
            1 year ago

            “Innocent until proven guilty” is a rather important foundation for most justice systems. You’re proposing the exact opposite.

      • FaceDeer@kbin.social
        1 year ago

        That way people can see if their works have been stolen or not.

        Firstly, nothing at all is being “stolen.” The words you’re looking for are “copyright violation.”

        Secondly, it does not currently appear that training an AI model on published material is a copyright violation. You’re going to have to point to some actual law indicating that. Currently, that sort of thing is generally covered by fair use.