Criticism of AI taking jobs is a fair take. Even if AI only makes workflows more efficient, that means fewer people need to be hired in the industry. It’s fair to recognize that AI is valuable and doing amazing things while also criticizing the downsides of using it.
AI is in its infancy. New techniques are only going to make it better and better at what it does - it will end up taking jobs. But broadly claiming it’s bad because you can’t read words that weren’t readable in the source material to begin with is silly.
The reactionary takes are “AI is useless, nobody should be using it” or “They should label everything that uses AI so I can avoid it!” Complaining about the ‘quality’ of the AI upscale, even though it looks FANTASTIC, is just as reactionary. The upscale wouldn’t exist at all if AI hadn’t made it possible, so it’s clearly worth using, and it’s produced a result I’m sure MANY people are happy with. There’s only a small handful of extraordinarily LOUD individuals making a fuss over it.
I think the biggest criticism of AI is the one almost NOBODY actually complains about: jobs. Wages are already lower than they’ve been in 20 years, and NOW it takes EVEN fewer people to do the same work? Wealth inequality is only going to be exacerbated by AI.
Edit: Also, it wasn’t me who downvoted your question asking me to define reactionary – I upvoted to try and counter it; it’s a fair question to ask, I think.
I think labeling things made by AI is a reasonable request. In this specific example, someone buying Wallace & Gromit in 4K is doing so out of love for claymation and Aardman’s work in it. They want high definition specifically to see the detail that went into the handcrafted sets and characters. Getting a smoothed-over statistical average, when you paid for what you expected to be the highest-quality archive of an artistic work, would be more frustrating than just watching it in lower definition.
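To be fair, even classical (non-AI) upscaling literally invents in-between pixels by averaging their neighbors. A toy 1-D sketch of that smoothing, just to illustrate what ‘statistical average’ loosely means here (AI upscalers go further and fill in detail learned from training data - this is not Aardman’s actual pipeline, obviously):

```python
def upscale_2x(row):
    """Naive 2x linear upscale: each new pixel is the average of its
    two neighbours -- no learned prior, just smoothing."""
    out = []
    for a, b in zip(row, row[1:]):
        out += [a, (a + b) / 2]
    out.append(row[-1])
    return out

edge = [0, 0, 255, 255]   # a hard dark -> bright edge in a 1-D "image"
print(upscale_2x(edge))   # [0, 0.0, 0, 127.5, 255, 255.0, 255]
```

The hard 0 → 255 edge now passes through 127.5 - detail that was never in the source has been averaged into existence, which is exactly the kind of artifact people are worried about in an archival release.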
More generally, don’t the people working with these models also want AI output properly labeled? As I understand it, a model starts to degrade when its own output is fed back into its training data. With the rapid proliferation of AI-generated content, I’ve heard you can’t even train large language models to the same level of quality as you could before generative AI was released to the general public.
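That degradation is easy to cartoon in code: fit a simple model to data, sample from it, re-fit to the samples, and repeat. This is a toy Gaussian, not an LLM, and the sample size is deliberately tiny to exaggerate the effect - but the loss of diversity over generations is the basic mechanism people mean by “model collapse”:

```python
import random
import statistics

random.seed(0)

# Toy "model": a Gaussian re-fitted to its own samples each generation.
mu, sigma = 0.0, 1.0
n = 10                    # tiny samples per generation exaggerate the drift
for generation in range(500):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu = statistics.fmean(samples)      # re-fit mean to own output
    sigma = statistics.stdev(samples)   # re-fit spread to own output

print(sigma)  # ends up far below the original 1.0 -- the variety is gone
```

Each re-fit loses a little of the tails, and the losses compound - which is why training on unlabeled AI output scattered across the web is a real worry for the people building these things.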
I’m also kinda skeptical that this stuff has as many applications as are being touted. I’ve seen some interesting work on protein folding and statistical analysis in particle physics, but outside highly technical applications… it kinda seems like a solution in search of a problem. It’s something investors really, really like for their stock valuations, but I just don’t see it doing much for actual workers. At most, maybe it eliminates some middle-management email jobs.
https://huggingface.co/models

These models can do a LOT of different things. If you don’t see that, that’s an education problem, not an AI problem.
And combining these capabilities in new and unique ways is only going to make things even wilder. It won’t be long at all before my “Ummmmm, I’ll have aaaaaaaaa” order at McDonald’s doesn’t need to be taken by a human being and there’s just a single dude in the back running the whole place. That’s disruption on an economic scale never seen before. THAT is why companies the world over are so heavily invested in AI. It’s finally reached the threshold of replacing real labor - and labor is one of the largest expenditures for most companies. Paying for the electricity to run this stuff costs FAR less than paying a person for the same output.
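The electricity-versus-wages comparison is easy to sanity-check with back-of-envelope numbers - every figure below is a made-up placeholder, not a measurement, so treat it as a sketch of the argument rather than evidence:

```python
# All numbers here are hypothetical assumptions for illustration only.
gpu_power_kw = 0.7           # assumed draw of one inference GPU, in kW
price_per_kwh = 0.12         # assumed electricity price, in $/kWh
wage_per_hour = 15.00        # assumed hourly wage for the same task

energy_cost_per_hour = gpu_power_kw * price_per_kwh
print(f"electricity ${energy_cost_per_hour:.3f}/h vs wage ${wage_per_hour:.2f}/h")
```

Even with generous assumptions the raw energy bill is tiny next to a wage - but this leaves out hardware amortization, maintenance, and handling the orders the model gets wrong, which is exactly where claims like this get contested.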
McDonald’s canned their automated ordering experiment, and that was after running it across 100 stores for several years.
I am not convinced this replaces labor. Like any advance in hardware or software, it can make labor more efficient, but you still need people to do the work. People who own things for a living would really, really like that not to be the case - their interest in this isn’t rational decision-making, it’s deluded optimism.