• 1 Post
  • 8 Comments
Joined 1 year ago
Cake day: June 9th, 2023


  • I said it was a neural network.

    You said it wasn’t.

    I asked you for a link.

    You told me to do your homework for you.

    I did your homework. Your homework says it’s a neural network. I suggest you read it, since I took the time to find it for you.

    Anyone who knows the first thing about neural networks knows that, yes, artificial neurons are simulated with matrix multiplications, which is why people use GPUs to do them. The simulations are not down to the molecule because they don’t need to be. The individual neurons are relatively simple math, but when you get into billions of something, you don’t need extreme complexity for new properties to emerge (in fact, the whole idea of emergent properties is that they arise from collections of simple things, like the rules of the Game of Life, for instance, which are far simpler than simulated neurons). Nothing about this makes me wrong about what I’m talking about for the purposes of copyright. Neural networks store concepts. They don’t archive copies of data.
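
    To make the matrix-multiplication point concrete, here is a minimal sketch (the layer sizes and numbers are made up purely for illustration, not taken from any real model) of a layer of artificial neurons computed as one matrix multiply plus a simple nonlinearity:

    ```python
    import numpy as np

    # A toy "layer" of 4 artificial neurons, each reading 3 inputs.
    # All values here are arbitrary example numbers.
    inputs = np.array([0.2, -1.0, 0.5])      # incoming activations
    weights = np.random.randn(4, 3)          # one row of weights per neuron
    biases = np.zeros(4)

    # Every neuron in the layer is evaluated at once as a matrix multiplication,
    # which is exactly the kind of operation GPUs are built to do quickly.
    pre_activation = weights @ inputs + biases
    outputs = np.maximum(0.0, pre_activation)  # ReLU nonlinearity

    print(outputs)
    ```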





  • Except an AI is not taking inspiration, it’s compiling information to determine mathematical averages.

    The AIs we’re talking about are neural networks. They don’t do statistics, they don’t have databases, and they don’t take mathematical averages. They simulate neurons, and their ability to learn concepts is emergent from that, the same way the human brain’s is. Nothing about an artificial neuron ever takes an average of anything, reads any database, or does any statistical calculations (there’s a small sketch of what a single neuron’s “learning” actually looks like at the end of this comment). If an artificial neural network can be said to be doing those things, then so is the human brain.

    There is nothing magical about how human neurons work. Researchers are already growing small networks out of animal neurons and using them the same way that we use artificial neural networks.

    There are a lot of “how AI works” articles out there that put things in layman’s terms (and use phrases like “statistical analysis” and “mathematical averages”), and unfortunately people (including many very smart people) extrapolate from the incorrect information in those articles and end up making bad assumptions about how AI actually works.

    A human being is paid for the work they do, an AI program’s creator is paid for the work it did. And if that creator used copyrighted work, then he should be having to get permission to use it, because he’s profiting off this AI program.

    If an artist uses a copyrighted work on their mood board or as inspiration, then they should pay for that, because they’re making a profit from that copyrighted work. Human beings should, as you said, be paid for the work they do. Right? If an artist goes to art school, they should pay all of the artists whose work they learned from, right? If a teacher teaches children in a class, that teacher should be paid a royalty each time those children make use of the knowledge they were taught, right? (I sense a sidetrack – yes, teachers are horribly underpaid and we desperately need to fix that, so please don’t misconstrue that previous sentence.)

    There’s a reason we don’t copyright facts, styles, and concepts.

    Oh, and if you want to talk about something that stores an actual database of scraped data, makes mathematical and statistical inferences, and reproduces things exactly, look no further than Google. It’s already been determined in court that what Google does is fair use.
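
    To illustrate the point about learning above, here is a toy sketch of a single artificial neuron being adjusted by gradient descent (the numbers and the neuron itself are entirely made up for illustration, not any real model’s training code). Each training example produces an error signal, the weights get nudged a tiny bit, and the example is then discarded rather than being stored or averaged anywhere:

    ```python
    import numpy as np

    # Toy single neuron with 3 weights; all values are arbitrary examples.
    weights = np.array([0.1, -0.3, 0.7])
    learning_rate = 0.01

    def train_step(x, target):
        """Nudge the weights slightly toward a lower squared error on one example."""
        global weights
        prediction = weights @ x             # the neuron's output
        error = prediction - target
        gradient = 2 * error * x             # derivative of (error ** 2) w.r.t. weights
        weights = weights - learning_rate * gradient  # small adjustment; x is not kept

    # One example passes through, adjusts the weights a little, and is gone.
    train_step(np.array([1.0, 0.5, -0.2]), target=0.3)
    print(weights)
    ```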


  • Losing their life because an AI has been improperly placed in a decision-making position because it was sold as having more capabilities than it actually has.

    I would tend to agree with you on this one, although we don’t need bad copyright legislation to deal with it, since other laws can address it more directly. I would personally put in place an organization that requires rigorous proof that AI in those roles is significantly safer than a human, like the FDA does for medication.

    As for the average person who has the computer hardware and time to train an AI (bear in mind Google Bard and OpenAI use human contractors to correct misinformation in the answers as well as scanning), there is a ton of public domain writing out there.

    Corporations would love it if regular people were only allowed to train their AIs on things that are 75 years out of date. Creative interpretations of copyright law aren’t going to stop billion- and trillion-dollar companies from licensing things to train AI on, either by paying a tiny percentage of their war chests or by just ignoring the law altogether the way Meta always does and getting a customary slap on the wrist. What will end up happening is that Meta, Alphabet, Microsoft, Elon Musk and his companies, government organizations, etc. will all have access to AIs that know current, useful, and relevant things, and the rest of us will not, or we’ll have to pay monthly for the privilege of access to a limited version of that knowledge, further enriching those groups.

    Furthermore, if they’re using people’s creativity to make a product, it’s just WRONG not to have permission or to not credit them.

    Let’s talk about Stable Diffusion for a moment. Stable Diffusion models can be compressed down to about 2 gigabytes and still produce art. Stable Diffusion was trained on 5 billion images and finetuned on a subset of 600 million images, which means that the average image contributes 2 billion bytes / 600 million images, or a little bit over three bytes, to the final model. With the exception of a few mostly public domain images that appeared in the dataset hundreds of times, Stable Diffusion learned broad concepts from large numbers of images, similarly to how a human artist would learn art concepts. If people need permission to learn a teeny bit of information from each image (3 bytes of information isn’t copyrightable, btw), then artists should have to get permission for every single image they put on their mood boards or use for inspiration, because they’re taking orders of magnitude more than three bytes of information from each image they use for inspiration on a given work.
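
    The back-of-the-envelope arithmetic above, written out as a quick sketch (the ~2 GB model size and ~600 million fine-tuning images are the figures quoted in the paragraph, treated as rough round numbers):

    ```python
    # Rough bytes-per-training-image estimate from the quoted figures.
    model_size_bytes = 2 * 10**9       # ~2 GB compressed Stable Diffusion model
    training_images = 600 * 10**6      # ~600 million images in the fine-tuning subset

    bytes_per_image = model_size_bytes / training_images
    print(f"{bytes_per_image:.2f} bytes per image")  # prints 3.33 bytes per image
    ```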



  • Reddit’s far left can be pretty toxic too. As an old liberal myself, I don’t believe that there are any good kinds of hate or discrimination, but if you argue against that kind of crap, the absolute worst people come out to defend it. A good chunk of my negative interactions have been with those people.

    That being said, the Eternal September is real. I don’t know anyone in real life who actually thinks like that. The trouble is, if you have ten million users, a tenth of a percent of them could be assholes and that’s still 10,000 obnoxious assholes.