Ahahahahaa!!! I worked with that dude 10 years ago. He was a dickhead then, too.
Do you have any interesting stories with this guy?
It certainly works better than Google, not that that’s a high bar.
Is he saying that AI is going to help us reverse engineer the closed-source firmware of all the e-waste devices that are thrown away every day? Because that’s something I could applaud.
I’m so excited for AI-driven RE. I say this as a reverse engineer myself.
Protein-powered LLM
Kill me now.
I poured muscle milk on my server instead of upgrading my GPU so that I could self-host my own LLM. Ask me how that’s going!
How is it going?
Pourly
That’s the most techbro thing I’ve ever read.
Can’t wait for the Fight Milk powered AI!
Now with extra crow eggs!
AI won’t reverse enshittification, but it is one solution in this sense:
The internet is too far gone. It’s all bots and advertising and disinformation and exploitation and data harvesting. People still seem to trust content on the internet for some reason. AI will sort that out by making the internet so unreliable that people will have to distrust it to the extent they should.
It’s really optimistic to think that the general public will someday start to see LLM output as unreliable and untrustworthy.
I’ve seen everyone using AI at work and it’s bad. They’re clueless and just trust it implicitly. I’ve had to correct many mistakes, even ones made by IT admins who should really know better.
I’d go so far as to say my imposter syndrome has outright died after watching the ineptitude of people as they let AI tooling rot their work.
True, I was really surprised by the number of people I considered at least relatively intelligent putting way too much trust in LLM answers. I don’t think there’s any hope for humanity anymore, and I’m just waiting for Skynet to take over now.
Yeah, I would encourage people trying to dismiss LLMs and the future to actually try out some of the newer models. People are so fucking cooked, y’all have no idea. We’re going to face billions of people plugged into these things, falling in love with them, hanging on their every generated word, using them for literally every possible decision in life.
They’re seductive and powerful and can recognize patterns in your behavior specifically that will startle you and make you question what you know. If you’re smart, you will understand that this has a lot more to do with how simple our minds are than how magical the AI is, but the vast majority of ignorant people are going to lose themselves and just lean on LLMs a thousand times harder than they lean on their phones and social media right now.
Parents, teach your kids to limit internet time. Please, teach them language and critical thought and not to trust their senses online.
What models specifically?
Enshittification is about maximizing revenue, typically in a way that destroys your product. An example is ads on subscription video streaming services. Another good example is how, when some music artists say no to getting paid by Spotify, Spotify then suggests those songs more and collects the money that would have gone to the artist.
How the fuck is AI going to do anything about that? Are we talking Skynet, end of the world type thing? That should do it.
How. Exactly how…?
Because you’re literally saying that this shiny turd could be the lever to reverse enshittification.
So I’d love to know exactly how that’s going to be done.
Sigh.
The idea is that the only way to combat AI content is with AI content filters that can recognize plagiarized or copied works.
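Purely as a toy sketch of the “recognize copied works” half of that idea (not a claim about how any real filter works), the classic baseline is shingle overlap between a candidate text and a known source:

```python
# Toy copy-detection baseline: Jaccard overlap of k-word shingles.
# Real filters would use embeddings or trained classifiers; this only
# illustrates the shape of the comparison. All names here are made up.

def shingles(text: str, k: int = 5) -> set[tuple[str, ...]]:
    """All k-word windows of the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def copy_score(candidate: str, source: str, k: int = 5) -> float:
    """Jaccard similarity of shingle sets; close to 1.0 means near-verbatim."""
    a, b = shingles(candidate, k), shingles(source, k)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "the quick brown fox jumps over the lazy dog near the river bank"
tweaked  = "the quick brown fox jumps over the lazy dog near the river bend"

print(copy_score(tweaked, original))                           # ~0.8 -> flag it
print(copy_score("unrelated post about gardening", original))  # 0.0
```

Anything fancier (embeddings, watermark detection, trained classifiers) is a refinement of that same compare-against-known-content loop, just far more expensive to run at internet scale.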
It’s a predictable offered solution to a problem that companies don’t want to go away. Of course they’re going to suggest more of their product to cure the problem caused by their product. See: The NRA in the US.
The thing is, it’s probably right. At this point, unless we all just decide to make a new internet that requires verification of some kind, the only way we maintain the corpse of the internet as it currently exists is if we start using really beefy software to auto-moderate content on a much larger scale than just making laws that AI content needs to be tagged. And the AI situation is going to get a LOT worse. Humans are going to have a much harder time figuring out what’s real in very short order, as if it wasn’t already bad enough.
No really, we’re in for some shit. The AI wave isn’t going away; it’s barely getting started.
Or some community-funded, open-source, free version of this could be made.
We should have some intermediate protocol for the Internet that, on the one hand, requires identification to get in and the ability to be kicked out by a majority, whilst at the same time guaranteeing anonymity about which sites are being visited and what is being read.
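For what it’s worth, “you have to be a verified member to get in, but nobody can see what you do once inside” is roughly what blind signatures / anonymous credentials were designed for. A toy sketch of just that cryptographic building block (textbook-sized numbers, hypothetical names, not a real protocol spec):

```python
import hashlib
import math
import secrets

# Toy "identity authority" RSA keypair. The primes are textbook-sized values
# chosen for readability; a real system would use a proper crypto library.
p, q = 2003, 2011
n = p * q
phi = (p - 1) * (q - 1)
e = 65537
d = pow(e, -1, phi)                 # authority's private signing exponent

def digest(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# User side: make up a credential and blind it before showing it.
credential = b"one-anonymous-access-token"
m = digest(credential)
while True:
    r = secrets.randbelow(n - 2) + 2
    if math.gcd(r, n) == 1:         # blinding factor must be invertible mod n
        break
blinded = (m * pow(r, e, n)) % n

# Authority side: checks your identity out-of-band, then signs the *blinded*
# value, so it never learns the actual credential it just endorsed.
blind_sig = pow(blinded, d, n)

# User side: strip the blinding factor to get an ordinary signature.
sig = (blind_sig * pow(r, -1, n)) % n

# Any site can verify the credential came from the authority, but the
# authority can't link that verification back to the identity check.
assert pow(sig, e, n) == m
print("credential accepted; browsing stays unlinkable to the sign-up")
```

That only covers the “verified but anonymous” half; the “kicked out by a majority” part is a governance problem no amount of math solves on its own.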