The onrushing AI era was supposed to create boom times for great gadgets. Not long ago, analysts were predicting that Apple Intelligence would start a “supercycle” of smartphone upgrades, with tons of new AI features compelling people to buy them. Amazon and Google and others were explaining how their ecosystems of devices would make computing seamless, natural, and personal. Startups were flooding the market with ChatGPT-powered gadgets, so you’d never be out of touch. AI was going to make every gadget great, and every gadget was going to change to embrace the AI world.
This whole promise hinged on the idea that Siri, Alexa, Gemini, ChatGPT, and other chatbots had gotten so good, they’d change how we do everything. Typing and tapping would soon be passé, all replaced by multimodal, omnipresent AI helpers. You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you. Tech companies large and small have been betting on virtual assistants for more than a decade, to little avail. But this new generation of AI was going to change things.
There was just one problem with the whole theory: the tech still doesn’t work. Chatbots may be fun to talk to and an occasionally useful replacement for Google, but truly game-changing virtual assistants are nowhere close to ready. And without them, the gadget revolution we were promised has utterly failed to materialize.
In the meantime, the tech industry allowed itself to be so distracted by these shiny language models that it basically stopped trying to make otherwise good gadgets. Some companies have more or less stopped making new things altogether, waiting for AI to be good enough before it ships. Others have resorted to shipping more iterative, less interesting upgrades because they have run out of ideas other than “put AI in it.” That has made the post-ChatGPT product cycle bland and boring, in a moment that could otherwise have been incredibly exciting. AI isn’t good enough, and it’s dragging everything else down with it.
Archive link: https://archive.ph/spnT6
That’s not the AI problem. The AI problem is that it tracks the user for the benefit of the company that owns it. Fuck that. Pure and simple. Take the AI out and put Linux on it. I’m tired of Amazon loading up toilet paper in my cart before I ever know I’m running out of it. Sounds helpful, but it stretches to some really evil shit, like getting deported because your friends got deported and AI figured out you were here on the wrong visa. That shit is evil.
This whole promise hinged on the idea that Siri, Alexa, Gemini, ChatGPT, and other chatbots had gotten so good, they’d change how we do everything. Typing and tapping would soon be passé, all replaced by multimodal, omnipresent AI helpers. You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you. Tech companies large and small have been betting on virtual assistants for more than a decade, to little avail. But this new generation of AI was going to change things.
I have never and will never interact with my phone by speaking to it and I don’t want to be around other people who are doing that. The beauty of a touch screen and buttons is you can silently operate the device. Software can always be updated. They should be focusing on hardware features if they want to be innovative. Maybe they could start by adding back some of the shit they’ve removed.
I have never and will never interact with my phone by speaking to it and I don’t want to be around other people who are doing that.
Out of context this statement is hilarious.
It used to be that speaking to a phone was the only way to interact with it.
I have used voice commands. “Hey Google, show me the way to X” on the way to my car, or “Hey Google, call X” when I have to call a place whose number I don’t know. But I rarely do anymore, as Gemini takes longer to execute than it previously did. And the idea that a five-second sequence of “speak command, register, execute” will go even further and replace a tap to start an app or something is hilariously bad. It’s like they never used the AI they were shoving into everything.
The bar is so low that even a lean, secure Android OS without bloatware would be revolutionary.
I agree, but I suspect the problem is that people have different opinions on where that line is. Presumably somebody, somewhere actually plays that stupid Candy Crush thing on Windows, for example. It’s probably a ‘valuable service’ for it to be pre-installed for them.
I kinda hate them but they’re allowed to like it.
I could live with pre-installed apps as long as they can be removed… I remember having useless apps like Google Music, YouTube, weird browsers, and other random apps that could not be removed. I could only uninstall the updates; the base version would remain… That stuff is predatory. If I don’t use them, why should I be forced to have them on my phone?
Yup. I remember when the iPhone first appeared, my first one was the 3GS and they had so much pre-installed nonsense. It’s very frustrating.
This isn’t on topic necessarily, but if you want to know what they’re betting on for AI, look into contentcyborg.ai
They want to flood the internet with fake people, opinions, engagement, etc. This creates a feedback loop of marketing budgets flooding social media for the engagement frenzy, creating an ideological Dutch disease where anything will be said for a buck. We’re already there culture-wise, obviously, but now we’re offshoring fake souls, I guess.
I mean…. Anyone could have told you that.
You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you.
Ah, the promise made by every futurist ever.
They’re always wrong. New inventions are used to unemploy people, insert themselves between you and what you want to extract money, or to try to sell you something.
I’m reminded of the similar promises people made about the personal computer back in the late 1900s. You could have the computer in your living room, and it would check your stocks, write your letters, and do your shopping for you, without you having to lift a finger.
This just reminds me of the blockchain/NFT craze. NFTs are stupid as shit, but blockchain has its uses, just like LLMs. I refuse to call it AI because it’s not; it’s a language generator. A particularly expensive language generator that costs a lot in terms of resources, but still just a language generator. It’s not all that different from the crypto craze, especially if you want a GPU for other things.
As you say, LLMs have really useful applications. The problem is that “being a reliable virtual assistant” is not one of them. This current push is driven by shareholders and companies who are afraid to be seen as missing out. It’s the classic case of having what you think is a solution and trying to find the problem, rather than starting from a problem and trying to find a solution.
I use LLMs for some things and they are great, but to be honest, I haven’t seen any real world usage of blockchains besides cryptocurrencies and NFTs.
I would argue that they moved to LLMs because they had run out of ideas on actually improving cellphones. It wasn’t that they were distracted by them. They are trying to distract us because they need to cell new phones every year and nothing they’ve come up with is really justifying shelling out $1200 for a phone that’s virtually the same as the previous 3-5 iterations.
Weird. Couldn’t they ask AI what features to develop next based on brand reputation, time and resources?
This “new phone every year” is the worst consumer crapfest we have going. AI features feel like clutching at straws when seemingly everyone hates the battery life on every single phone. Slap a larger battery in there? Well now you get shit AI that burns whatever extra capacity was gained. I can’t name a single quality on an iPhone model from the last 6 years that I truly wanted, other than the size of my 13 mini. It works fine and it fits in my pocket. Now make one that stays on for a full 24 hours and doesn’t need a battery replacement every 2 years.
Blame the isheep for purchasing every crap offered.
There are plenty on Android as well and they also existed before smartphones.
Me breathing a sigh of relief for still using my S10.
It makes calls, sends texts, and I can read Lemmy with the app. What more do I need?
Love my S8, though there are apps I can’t run anymore because of how old the OS is.
Still, I’m keeping it till it dies.
LGV20 gang. I dread the day that my work apps stop working because the android version is too old.
I’m not sure if that’s a typo or brilliant. They need to “cell” new phones every year, indeed.
Celling cell phones is indeed profitable.
I’ve been using a Sunbeam flip phone for a year or so. Paid for the phone up front, and pay $3/mo for use of maps, speech recognition, and continued bugfixes.
Even if phones never got new features, dev time still needs to be committed to security updates, and services (like Siri) need to be paid for. The model of getting 100% of your revenue from new phone sales is starting to break. If I could pay $3/mo for Siri or whatever and never have my phone go obsolete, I think that’d be a good deal.
What the heck are you on about? That’s the worst possible solution to this. Are you some sort of masochist?
If Siri is something that needs to be paid for, don’t bundle it with the system. Charge extra from the start, and people can opt in to that shit.
Also, they run a massively profitable software store, and THAT is what justifies and pays for the bug fixing and security patches to the overall OS.
The “cell a year” practice isn’t to cover development costs, it’s to bring in massive profit by milking the consumeristic herd that buys their crap.
Heh forgot about the App Store.
Maybe a bad example, but there is certainly a recent trend of purpose-built hardware with “free” services that fail to justify the cost of the necessary backend infrastructure and get turned into useless landfill.
Car Thing, Facebook Portal, and this dumb little treat dispensing dog webcam that I used to have come to mind.
Everyone hates subscriptions, but when it comes to hardware that needs to generate revenue to function, I think a token dollar or so a month is appropriate.
Edit: also thinking about it more, core OS software features that are arbitrarily linked to new hardware (like Apple Intelligence) are definitely designed to sell more phones over just selling more software on existing phones. I think it’s fair to say that there’s a revenue link there.
I’m a firm believer that hardware must never be linked to any sort of subscription to function. If it does, then it’s because the hardware only serves as a way to access the content, but in that case, it must allow competition between providers for that content.
If I buy something, it’s mine. No one should be allowed to dictate how I use it, I want to be free to do what I want with it.
It’s more boring than this, I think. The AI FOMO is real, so they cram it in, rather clumsily and ultimately pointlessly. But there were so many missed opportunities on Apple and Samsung flagships this year, and it boils down to the capitalistic urge to save money while charging customers the same, plus having no real competition. OPPO, OnePlus, and Vivo all have better devices, but importing them and getting them to work on US carriers is basically not possible. Not to mention the incentives the carriers throw at you to keep you locked in to that manufacturer.
Typing and tapping would soon be passé,
The tech certainly isn’t ready for this. My voice input to ChatGPT gets automatically translated into Welsh.
Some More News had the right take on this: all these companies just dumped (either in investment or development) (hundreds of) billions of dollars into AI development.
The problem is, we’re still 10-15 years away from AI being actually useful in gadgets and stuff. But these companies want to get paid now, so they’re shoving the cheapest, shittiest “functional” AI onto the market just to try and recoup some losses. And it’s painfully apparent it isn’t working.
Just look at how people use their smart speakers. They ask them to set timers or ask for the weather. AI will be the norm once the benefit is obvious to everyone: when I can trust my AI with my credit card info and allow it to purchase stuff for me. Right now AI is basically a self-organizing dictionary that is often confidently incorrect. Not once has GPT told me it didn’t know something.
I asked ChatGPT about a quote from Iain Banks’ The Player of Games. It claims not to know its contents except for the cover blurb. Bullshit.
I fed it a detail and it suddenly remembered.
They must have programmed ChatGPT to deny that it has read copyrighted works.
Deepseek had no such qualms. It couldn’t give an exact quote but it did give what it called an approximation.
As a total aside: the Baader-Meinhof phenomenon at play. Just yesterday I was talking about Iain Banks because his work was quoted in a video game. And here he shows up again.
Generations* Let’s not forget we produce 3 or 4 models of phones a year, per manufacturer. That’s an alarming amount of e-waste for the planet, and we don’t have the raw materials to keep up this pace forever.
And I still can’t find a phone that has a replaceable battery and a proper IP rating, and doesn’t cost an arm and a leg (or, alternatively, cost thrice as much as the potato display and CPU would warrant). You can get two of those things, but not all three. I won’t even begin to speak of having an unlocked bootloader or, while having the rest in place, also a flush camera. FFS, I’d be fine with no camera; I just don’t want a hump. I’d be fine with 720p, it’s a tiny screen after all, but good contrast instead of 8K doesn’t seem to be a thing that companies think anyone would be interested in.
Stop fucking innovating, just apply lessons already learned. Design a phone with the mindset of designing a bottle opener.
Depends on what you mean by forever. Who knows what tomorrow brings. We could be smashed back to the stone age, and effectively extinct, sometime next week.
Honestly yeah, none of the crap being made right now is going to appear relevant in the future, just like 3d tvs
3d tvs is my favorite analogy. Easiest way to illustrate the bubble of hype.
That’s the saddest part, I loved my 3dtv until they stopped making media for it. It was a fun gimmick, but I was definitely not “most consumers” lol
Same with curved TVs. I knew a guy who dropped $2500 on a 65" Samsung curved TV thinking it was the coolest thing ever. He mounted it on his wall and quickly realized he’d made a mistake because of the horrible viewing angles. Unfortunately for him, he’d thrown away the box at the store because it wouldn’t fit in his car, so he was stuck with it.
I’ve heard it put very well that AI is either having a Napster moment, in which case we will not recognize the world 10 years from now, or it’s having an iPhone moment and it will get marginally better at best but is essentially in its final form.
I personally think it’s more like 3D movies and in 20 years when it comes back around we’ll look at this crap like it was Red and Blue glasses.
I think it’s the iPhone stage. We’ve had predictive text in some form or other for a long time now, and that’s all LLMs are. Can’t speak for the image/video generators, but I expect those will become another tool in the box that gets better but does the same thing.
I just can’t see a whole lot of improvement in these products making much change to how we already use them.
Transformer-based LLMs are pretty much at their final form, from a training perspective. But there’s still a lot of juice to be gotten from them through more sophisticated usage, for example the recent “Atom of Thoughts” paper. Simply by directing LLMs through the correct flow, you can get much stronger results with much weaker models.
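The “directed flow” idea can be shown with a toy sketch: decompose a question into atomic sub-questions, answer each one separately, then take the merged result. The `call_model` function below is a canned stub standing in for a real LLM API call, purely for illustration; the flow structure is the point, not the stub.

```python
# Toy sketch of a decompose-then-answer flow, in the spirit of
# Chain of Thought / Atom of Thoughts prompting. `call_model` is a
# stub with canned responses standing in for a real LLM API.

def call_model(prompt: str) -> str:
    # Placeholder: a real implementation would call an LLM here.
    canned = {
        "Decompose: How many legs do 3 spiders have?":
            "How many legs does one spider have?; What is 8 * 3?",
        "Answer: How many legs does one spider have?": "8",
        "Answer: What is 8 * 3?": "24",
    }
    return canned.get(prompt, "unknown")

def directed_flow(question: str) -> str:
    # Step 1: ask the model to break the question into atomic sub-questions.
    subs = call_model(f"Decompose: {question}").split("; ")
    # Step 2: answer each sub-question independently. A weaker model often
    # handles small atomic steps better than the original compound question.
    answers = [call_model(f"Answer: {s}") for s in subs]
    # Step 3: treat the last sub-answer as the result; a real flow would
    # have a dedicated merge/verification step here.
    return answers[-1]

print(directed_flow("How many legs do 3 spiders have?"))  # -> 24
```

The same scaffolding works with any model behind `call_model`; the gains come from the decomposition, not from the model getting smarter.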
How long until someone makes a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
When an LLM fabricates a falsehood, that is not a malfunction at all. The machine is doing exactly what it has been designed to do: guess, and sound confident while doing it.
When LLMs get things wrong they aren’t hallucinating. They are bullshitting.
source: https://thebullshitmachines.com/lesson-2-the-nature-of-bullshit/index.html
guess, and sound confident while doing it.
Right, and that goes for the things it gets “correct” as well, right? I think “bullshitting” can give the wrong idea that LLMs are somehow aware of when they don’t know something and can choose to turn on some sort of “bullshitting mode”, when it’s really all just statistical guesswork (plus some preprogrammed algorithms, probably).
Of course, and that’s why they need an anti-bullshit step that doesn’t currently exist. I still believe it’s possible to rein LLMs in by maximizing their strengths and minimizing their weaknesses.
they need an anti-bullshit step that doesn’t currently exist.
This will never exist in a complete form. Wikipedia doesn’t have this solved; randomly generated heuristics will certainly never have it either.
I’m not sure humans can do it in a complete form. But I believe that is possible to approach human levels of confidence with AI.
… a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
Turing Completeness maybe?
I read that as “if you do the thinking for them, LLMs are quite good”
Well, that thinking flow can be automated, as far as we have seen. The Chain of Thought and Atom of Thoughts paradigms have been very successful and don’t require human intervention to produce improved results.
improved, but still bullshit
Detecting a hallucination programmatically is the hard part. What is truth? Given an arbitrary sentence, how does one accurately measure the truthfulness of it? What about the edge cases, like a statement that is itself true but misrepresents something? Or what if a statement is correct in a specific context, but generally incorrect?
I’m an AI optimist but I don’t see hallucinations being solved completely as long as LLMs are statistical models of languages, but we’ll probably have a set of heuristics and techniques that can catch 90% of them.
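One heuristic of that kind already in use is self-consistency sampling (the idea behind approaches like SelfCheckGPT): ask the model the same question several times at nonzero temperature and flag low agreement as a likely hallucination. A minimal sketch, with a canned `sample_answer` stub in place of real sampled model calls; the question strings and threshold are made up for illustration:

```python
from collections import Counter

def sample_answer(question: str, seed: int) -> str:
    # Stub: a real implementation would sample an LLM at temperature > 0.
    # Here we fake a model that is consistent on one question and not the other.
    fake = {
        "capital of France": ["Paris", "Paris", "Paris", "Paris", "Paris"],
        "obscure trivia": ["A", "B", "C", "A", "D"],
    }
    return fake[question][seed]

def consistency_check(question: str, n: int = 5, threshold: float = 0.6):
    # Sample n answers and measure agreement on the most common one.
    answers = [sample_answer(question, i) for i in range(n)]
    top, count = Counter(answers).most_common(1)[0]
    agreement = count / n
    # Low agreement is a cheap red flag for hallucination, not proof either way.
    return top, agreement, agreement >= threshold

print(consistency_check("capital of France"))  # -> ('Paris', 1.0, True)
print(consistency_check("obscure trivia"))     # -> ('A', 0.4, False)
```

This catches confabulation that varies between samples; it cannot catch an error the model makes consistently, which is why it caps out as one heuristic among several rather than a complete fix.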
I mean, in the end, I think it’s literally an unsolvable problem of intelligence. It’s not like humans don’t “hallucinate” ourselves. Fundamentally your information processing is only as good as the information you get in, and if the information is wrong, you’re going to be wrong. Or even just mistakes. We make mistakes constantly, and we’re the most intelligent beings we know of in the universe.
The question is what issue exactly we’re attempting to solve regarding AI. It’s probably more useful to reframe it as “The AI not lying/giving false information when it should know better/has enough information to know the truth”. Though, even that is a higher bar than we humans set for ourselves
Yeah, like, have you ever met one of those crazy guys who think the pyramids were literally built by aliens? Humans can get caught in a confidently wrong state as well.
We used to call those the AI winters. Barely any progress for years until someone has a great idea, and suddenly there is a new form of AI and a new hype cycle, again ending in an AI winter.
In a few years, somebody will find a way that leaves LLM in the dust but comes with its own set of limitations.
AI image generation is pretty cool. If it’s used in moderation and as a test bed. It’s a tool, not a complete piece of work imo.
I could see text gen being useful for some things, but I feel like it can very easily and sloppily become a crutch. If it were used in the same spirit as a spreadsheet, I’d feel better about it.
LLMs are just ridiculous to me.
I’m curious as to what the opinion of AI will be in 10 years
I’m betting the same opinion we have today about 3D TVs
Vr, crypto
All good tech, but somehow it didn’t land, for various reasons. At root, though: chasing excessive hype, and nada on actual product delivery…
Blockchain 10 years ago was hyped like AI now.
Blockchain is now used by the US president to make money in barely legal ways.
That’s not a good outlook.
Probably the same as we have now, “be neat if and when it eventually arrives”.
I haven’t gotten anything of use from Apple Intelligence. Even just using it is difficult, and Siri is possibly dumber than she was before.
Siri has not been integrated with AI yet. They pushed that to 2026.
Based on what I’ve seen of my partner’s phone, it provides an assessment of text messages. Why would someone want that?
I’ve used the “writing tools” extensively for minor changes, like changes to capitalization on a large block of text. It makes the phone a little less of a consumption-only device.
I’ve also found the image editing tools handy from time to time, and the automatic calls to ChatGPT on the more complex natural-language questions can sometimes be handy, even if you need to wait a while for the response.
The notification summaries are sometimes very handy and sometimes absurdly incorrect and misleading.
I’m really looking forward to Siri being less frustratingly stupid, but we’ve got a while to wait for that, and we probably shouldn’t set our expectations too high. I do respect that they’ve not shipped it rather than shipping something broken, though.