They’re trained on technical material too.
Is this an attempt to beat those monopoly allegations?
Or just not show people what you’re typing.
Art isn’t work, it’s speech. It’s part of the human condition. Art is useless, said Wilde. Art is for art’s sake—that is, for beauty’s sake.
I do not make art, I just post it here on lemmy. I’d be OK with that. People freely create, copy, and iterate on memes, and they are the greatest cultural touchstones we have. First and foremost, people create because they have something to say.
People already make memes and mods for free. Humans are a social species and will continue to create and share things until the end of time. Making money off of creation is a privilege for only a tiny few.
That whole page is full of wild shit.
I can’t tell if this is a joke or not.
A computer like that is useful outside of work. I’d pay for it out of pocket if I had to.
The only thing I got from this is that bro loves ads more than anything in the world.
I accept regulations are real, but not every way to help people requires dealing with regulations. I’m still waiting on that proof, by the way.
There are more ways to help people than making medical software. Rather than suggesting they could focus on simpler things, you jump straight to all projects running afoul of FDA regulations, which is pretty telling. All while still not having provided a single project halted by FDA order.
Which projects have been shut down by FDA order?
Open source AI is huge, and I don’t think you need FDA approval to distribute a model. Where are you even getting that from?
What about open source projects?
You should read these two articles from Cory Doctorow. I think they’ll help clear some things up for you.
https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand
I just don’t particularly want that work to be used to make new work without the skills necessary to do so well. LLMs/machine learning cannot gain those skills, because they are not alive and thus cannot create.
This kind of sentiment saddens me. People can’t look past the model and see the person who put in their time, passion, and knowledge to make it. You’re begrudging someone who took a different path in life, spent their irreplaceable time acquiring different skills, and applied them to achieve something they wanted. Because they didn’t do it the same way you did, with the same opportunities and materials, they somehow don’t deserve it.
The article does mention, though, that using things ‘without permission’ is how a lot of people, especially those from marginalised communities, became and remain(ed) poor, likely at the hands of those in power. So again, I think we’re on the same page there?
We are, but that’s just one symptom of a larger exploitative system where the powerful can extract while denying opportunities to the oppressed. AI training isn’t only for mega-corporations. We shouldn’t put up barriers that benefit only the ultra-wealthy, handing corporations a monopoly on a public technology by making it prohibitively expensive for regular people. Mega-corporations already own datasets and have the money to buy more. And that’s before their predatory ToS granting them exclusive access to user data, effectively selling our own data back to us.
Regular people, who could have had access to a competitive, corporate-independent tool for creativity, education, entertainment, and social mobility, would instead be left worse off than where we started. We need to make sure this remains a two-way street: corporations have so much to lose, and we have everything to gain. Just look at the floundering cinema industry, weak cable and TV numbers, and print media.
Did you read the first one?
Making quantitative observations about works is a longstanding, respected and important tool for criticism, analysis, archiving and new acts of creation. Measuring the steady contraction of the vocabulary in successive Agatha Christie novels turns out to offer a fascinating window into her dementia: https://www.theguardian.com/books/2009/apr/03/agatha-christie-alzheimers-research
The final step in training a model is publishing the conclusions of the quantitative analysis of the temporarily copied documents as software code. Code itself is a form of expressive speech – and that expressivity is key to the fight for privacy, because the fact that code is speech limits how governments can censor software: https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech/
What you want would give Disney broad powers to oppressively control large amounts of popular discourse. I acknowledge that specific expressions deserve protection and should retain specific rights; those rights have always enabled ethical self-expression and productive dialogue. Wanting to bar others from analyzing your work, to keep them from iterating on your ideas or expressing the same ideas differently, is both selfish and harmful.
You’re against the type of system you desperately want to become. Using things “without permission” forms the bedrock on which artistic expression and free speech as a whole are built. I don’t think any state is going to pass a law that guts the core freedoms of art, research, and the basic functionality of the internet and computers.
You should read these two articles from Cory Doctorow. I’d like to hear your thoughts.
https://pluralistic.net/2024/05/13/spooky-action-at-a-close-up/#invisible-hand
As long as your AI’s output doesn’t somehow infringe on its training data, you’re allowed to use whatever you want, just like reviewers, analysts, and indexers do.