Apple is leagues ahead of Microsoft with the whole AI PC thing and so far it has yet to mean anything. I don't think consumers care at all about running AI, let alone running AI locally.
I'd try the whole AI thing on my work MacBook, but Apple's built-in AI stuff isn't available in my language, so perhaps that's also why I haven't heard anybody mention it.
People don’t know what they want yet; you have to show it to them. Getting the hardware out is part of it, but you are right, we’re missing the killer apps at the moment. The very need for privacy with AI will make personal hardware important no matter what.
Two main things are holding back the "killer app" for AI: you have to fix hallucinations and make agents more deterministic. Once those are in place, people will love AI when it can make them money somehow.
How does one “fix hallucinations” on an LLM? Isn’t hallucinating pretty much all it does?
No no, not at all, see: https://openai.com/index/why-language-models-hallucinate/ which was recently featured on the front page - an excellent, clean take on how to fix the issue (they already got a long way with gpt-5-thinking-mini). I liked this bit for its clear outline of the problem:
> Think about it like a multiple-choice test. If you do not know the answer but take a wild guess, you might get lucky and be right. Leaving it blank guarantees a zero. In the same way, when models are graded only on accuracy, the percentage of questions they get exactly right, they are encouraged to guess rather than say “I don’t know.”

> As another example, suppose a language model is asked for someone’s birthday but doesn’t know. If it guesses “September 10,” it has a 1-in-365 chance of being right. Saying “I don’t know” guarantees zero points. Over thousands of test questions, the guessing model ends up looking better on scoreboards than a careful model that admits uncertainty.
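The scoring incentive in that quote is easy to check with back-of-the-envelope arithmetic (the 1,000-question count below is an assumed number, just for illustration):

```python
# Accuracy-only grading, as described in the quoted example: on
# questions the model doesn't know, guessing a date earns credit
# with probability 1/365, while abstaining always scores zero.
unknown_questions = 1000  # hypothetical benchmark size

guessing_expected = unknown_questions * (1 / 365)  # ~2.74 expected points
abstaining_expected = unknown_questions * 0        # 0 points

# Any nonzero guess rate beats honest abstention on this metric.
print(guessing_expected > abstaining_expected)  # True
```

Which is exactly why the post argues benchmarks need to stop rewarding confident wrong answers over "I don't know."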
Coding agents have shown how. You filter the output against something that can tell the LLM when it’s hallucinating.
The hard part is identifying those filter functions outside of the code domain.
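The coding-agent version of that loop can be sketched like this (the `fake_llm` stub is hypothetical, standing in for a real model call; the checker here is just "does it parse as Python," where a real agent would run a compiler or test suite):

```python
import ast

def fake_llm(prompt, attempt):
    # Stand-in for a real model call (hypothetical). Returns a broken
    # draft first, then a valid one, to exercise the retry loop.
    if attempt == 0:
        return "def add(x, y): return x +"
    return "def add(x, y):\n    return x + y"

def compiles(code):
    # Deterministic filter: does the draft even parse as Python?
    try:
        ast.parse(code)
        return True
    except SyntaxError:
        return False

def generate_checked(prompt, check, retries=3):
    """Ask the model, reject any draft the checker flags, retry."""
    for attempt in range(retries):
        draft = fake_llm(prompt, attempt)
        if check(draft):
            return draft
    raise RuntimeError("no draft passed the checker")

good = generate_checked("write add(x, y)", compiles)
```

The point of the thread stands: `compiles` is cheap and objective for code, but there's no equivalent oracle for, say, a biography or a legal summary.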
It's called RAG (retrieval-augmented generation), and it's getting very well developed for some niche use cases such as legal, medical, etc. I've been personally working on one for mental health, and please don't let anybody tell you that they're using a bare LLM as a mental health counselor. I've been working on it for a year and a half, and if we get it to production-ready in the next year and a half I will be surprised. Keeping up with the field, I don't think anybody else is any closer than we are.
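For anyone unfamiliar, the retrieval half of RAG can be sketched in a few lines. This toy version scores documents by word overlap; the corpus and scoring are purely illustrative (real systems use embeddings and a vector store), but the shape of the pipeline is the same:

```python
# Toy retrieval step of a RAG pipeline (illustrative only).
def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query, return top k."""
    scored = sorted(
        corpus,
        key=lambda doc: len(tokenize(doc) & tokenize(query)),
        reverse=True,
    )
    return scored[:k]

corpus = [
    "Statute of limitations for contract claims is six years.",
    "The console launched in 2006 with a sports game bundled.",
]
context = retrieve("contract claim limitation period", corpus)
# The retrieved passage is then prepended to the LLM prompt, so the
# model answers from sources instead of guessing from its weights.
```

The grounding comes from the second step: the model is instructed to answer only from the retrieved context, which is what makes hallucinations tractable in narrow domains like legal or medical.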
Other than that, Mrs. Lincoln, how was the Agentic AI?
You can’t fix the hallucinations
We've shown people so many times and so forcefully that they're now actively complaining about it. It's a meme.
The problem isn't getting your killer AI app in front of eyeballs. The problem is showing something useful or necessary or wanted. AI has not yet offered the common person anything they want or need! The people have seen what you want to show them, they've been forced to try it, over and over. There is nobody who interacts with the internet who has not been forced to use AI tools.
And yet still nobody wants it. Do you think that they'll love AI more if we force them to use it more?
> And yet still nobody wants it.
Nobody wants the one-millionth meeting transcription app or the one-millionth coding agent, sure.
It's a developer creativity issue. I personally believe the lack of creativity is so egregious that if anyone were to release a killer app, the entirety of the lackluster dev community would copy it into eternity, to the point where you'd think that's all AI can do.
This is not a great way to start off the morning, but gosh darn it, I really hate that this profession attracted so many people that just want to make a buck.
——-
You know what was the killer app for the Wii?
Wii Sports. It sold a lot of Wiis.
You have to be creative with this AI stuff, it’s a requirement.