Deep breath. There’s no sense in trying to outcompete Google in burning cash. They’ve got time to wait until there’s the beginning of commodification of the tech, and a large profitable market to be had.
Or, Apple's just so bad at this that they're fumbling the bag. Billions in cash on hand each quarter, but they don't have the balls Zuck has to pay unreasonable money. They have their own hardware like Google does, but they're talking about Perplexity??? They have all the data but can't seem to ship an LLM that can set an alarm and be a chatbot at the same time?
Sometimes companies just don't do well enough.
> Billions in cash on hand each quarter but don’t have the balls that zuck has to pay unreasonable money
It remains to be seen whether this was a smart move, or just throwing money at the wall.
The difference is it’s a move. Actually doing something rather than putting out internal PR.
Zuck tried and failed with the metaverse. That was a huge waste, but he can afford it, and fortune favours the brave.
You don’t think Apple makes moves?
Not everyone has to make the same move at the same time.
Apple did make many, just not the right or good ones in the past decade.
Services (iCloud, Music, and TV), AirPods, the Watch, the M-series processors, and the new modem seem like good ones.
If those don’t seem like right or good moves, I can’t imagine much will impress you in this world.
The Metaverse was a waste of billions of dollars to develop a product that nobody wanted. In no world was that a smart business move, or one that should be emulated. Doing nothing is better than flushing money down the toilet.
> "They have all data but can’t seem to get an llm that can set an alarm and be a chatbot at the same time?"
This is actually one of the hardest frontier problems. The "general purpose" assistant is among the singularly hardest technical problems with LLMs (or any kind of NLP).
I think people are so easily snowed by LLMs' apparent linguistic fluency that they impute capability to it. This could not be further from the truth.
In reality, an LLM presented with a vast array of tools has extremely poor reliability, so if you want a thing that can order delivery and remember your shopping list and remind you of your flight and play music, you're radically exceeding the capabilities of current models. There's a reason successful uses of agentic LLMs (anything that isn't demoware/vaporware) tend toward narrow-domain use cases.
There's a reason Google hasn't done it either, and indeed nor has anyone else: neither Anthropic nor OpenAI has a general purpose assistant (defined as one able to execute an indefinite number of arbitrary tools to do things for you, as opposed to merely conversing with you).
You split up the tasks into sub-agents. This is something my company builds on top of LangGraph.
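The pattern is roughly: a router classifies the utterance into a domain, then a domain-specific sub-agent (with only that domain's tools) handles it. Here's a minimal sketch of that idea; the domains, keyword routing, and agent behaviour are hypothetical placeholders (a real system would use an LLM classifier and real tool calls), not anyone's actual product logic.

```python
# Hypothetical sub-agent pattern: route first, then delegate to a
# narrow agent that only knows about its own domain's tools.

def music_agent(utterance: str) -> str:
    # In a real system: an LLM restricted to music tools (play, queue, ...)
    return f"[music agent] handling: {utterance}"

def alarm_agent(utterance: str) -> str:
    # In a real system: an LLM restricted to clock/alarm tools
    return f"[alarm agent] handling: {utterance}"

# Toy keyword router; a production router would itself be an LLM call.
ROUTES = {
    "play": music_agent,
    "alarm": alarm_agent,
}

def route(utterance: str) -> str:
    for keyword, agent in ROUTES.items():
        if keyword in utterance.lower():
            return agent(utterance)
    return f"[fallback agent] handling: {utterance}"

print(route("Set an alarm for 7am"))
print(route("Play some jazz"))
```

The point of the split is that each sub-agent sees a handful of tools instead of thousands, which is where per-call reliability comes from.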
Sure, go try it and evaluate it rigorously end-to-end, over a sufficient number and variety of tools.
For the purposes of the exercise, let's conservatively say, maybe ~2000 tools covering ~100 major verticals of use cases. Even that may be too narrow for a true general purpose assistant, but it's at least a good start. You can slice the sub-agents however you'd like.
If you can get recall, for real user utterances (not contrived eval utterances authored by your devs and MLEs), over 70% across all the verticals/use cases/tool uses, I'd be extremely impressed. Heck, my thoughts on this won't matter - if you can get the recall for such a system over the bar you'd have cracked something nobody else has and should actively try to sell it to Google for nine figures.
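The eval being described is simple to state: per-vertical recall over real logged utterances, with the bar applied to every vertical, not just the aggregate. A sketch of that scoring, with made-up data standing in for real eval logs:

```python
# Per-vertical recall with a 70% bar on each vertical.
# The (vertical, correct_tool_invoked) pairs below are invented
# illustration data, not results from any real system.
from collections import defaultdict

results = [
    ("food_delivery", True), ("food_delivery", True), ("food_delivery", False),
    ("alarms", True), ("alarms", True),
    ("flights", True), ("flights", False), ("flights", False),
]

hits, totals = defaultdict(int), defaultdict(int)
for vertical, ok in results:
    totals[vertical] += 1
    hits[vertical] += ok

recall = {v: hits[v] / totals[v] for v in totals}
passes_bar = all(r >= 0.70 for r in recall.values())

print(recall)        # flights at 1/3 drags the system below the bar
print(passes_bar)    # False: one weak vertical fails the whole eval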
Yeah, it turns out many nerds don't consider that the amazing tools we're using for constrained tasks aren't that great for more general-purpose things. Writing a spike, spitting out unit tests, or vibe coding a front-end feature is not the same as planning a trip to Europe, balancing accounts, or managing a schedule.
So much attention, effort, and tooling has focused on getting LLMs better at writing more and more code. They can grep and curl and run scripts and iterate and build things really fast, and maybe even maintain it if given enough guardrails and direction.
But it turns out we have had a _ton_ of useful training data for models to work with for software. Not just books or docs, but examples, tests, snippets, and full programs for just about any language. Show me a Stack Overflow with Playwright scripts or API calls (hah, as if that's possible) to build itineraries from Delta, AA, United, Priceline, Expedia, etc. ... which is one part of one piece of the AI assistant pipe dream.
I don't think it's impossible that, as these tools get much smarter and more generally capable, we get decent assistants in other constrained, non-software domains, but it will take very good companies focusing on it for a long time. Much like any product that tries to do these sorts of things.
It's so easy for programmers in our bubble to overlook the complexity involved in automating or even _describing_ simple tasks that humans navigate every day via habit, learning, experience, and perception... all things that LLMs struggle with constantly.
Just to once again bring you up to speed with where the market's at, the thing you originally called out as being difficult is a solved problem.
There's not just one specific solution to it either; there's a whole class of tooling for it. And I doubt Google would pay nine figures for something that's built on top of libraries they put out, using models they developed.
As of August 1st, "we" (as in, I personally developed it with my company, and it has been paid for with real dollars which are now sitting in my bank) have an F100 using this tech in production.
As for the no-true-Scotsman fallacy you're putting in front of yourself, I'll let you deal with that, but I would like to see how you came up with the maths.
> They have all data but can’t seem to get an llm that can set an alarm and be a chatbot at the same time?
This does seem like an embarrassing failure, but even Google has not finished replacing Assistant with Gemini. Some functionality has also been lost (maybe temporarily) in the process.
They are not talking about Perplexity; the endless rumor mill talks about Perplexity. The same rumor mill that has had them buying everything from Disney to Porsche to Nike for decades.
Undercut the competitors by charging less. Apple can afford to run its product at a loss.
They don't really have much time to wait. They could be forced to allow default voice assistants and access to private APIs by the DOJ antitrust case, the App Store Freedom Act, or the Open Markets Act; if any of those come through, OpenAI and Gemini will quickly end up entrenched.
Isn't a larger concern that Tim "Services" Cook failed to skate where the puck was headed on this one? 15 years ago the Mac had Nvidia drivers, OpenCL support and a considerable stake in professional HPC. Today's Macs have none of that.
Every business has to make tradeoffs, it's just hard to imagine that any of these decisions were truly worthwhile with the benefit of hindsight. After the botched launch of Vision Pro, Apple has to prove their worth to the wider consumer market again.
Apple Silicon Macs are great for running LLMs. The unified memory and memory bandwidth of the Max and Ultra processors are very useful for doing inference locally.
Great news, but entirely lost on commercial hyperscalers and much of the PC market. Apple's recalcitrance towards supporting Nvidia drivers basically killed their last shot at real-world rackmount deployment of Apple Silicon. Now you can go buy an ARM Grace CPU that does the same thing, but cheaper and with better software support.
You really can't. NVIDIA's ARM chip still looks nerfed compared to Apple's offering, and... I can run 40GB-sized LLMs on a plane with no internet. It's not something you can do with any other platform.
I imagine they recognize the need for increased memory bandwidth and will bump that up significantly -- they are well positioned to do so.
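The reason bandwidth is the lever: autoregressive decode is memory-bound, since each generated token requires streaming roughly all model weights from memory, so tokens/sec is bounded by bandwidth divided by weight size. A back-of-the-envelope sketch (the numbers are illustrative assumptions, not benchmarks):

```python
# Rough upper bound on decode throughput for a memory-bandwidth-bound
# LLM: tokens/s <= memory_bandwidth / bytes_of_weights_read_per_token.
# Ignores KV cache traffic, compute, and batching, so real numbers
# will be lower.

def decode_tokens_per_sec(bandwidth_gb_s: float, weights_gb: float) -> float:
    return bandwidth_gb_s / weights_gb

# Hypothetical: ~40 GB of quantized weights on a chip with ~800 GB/s
# of unified memory bandwidth.
print(decode_tokens_per_sec(800, 40))  # -> 20.0 tokens/s upper bound
```

This is why doubling bandwidth roughly doubles local decode speed for large models, independent of raw compute.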
> Isn't a larger concern that Tim "Services" Cook failed to skate where the puck was headed on this one?
Doesn't somebody (not named Nvidia) need to make a serious profit on AI before we can say that Tim Cook failed?
OpenAI and Anthropic aren't anywhere close. Meta? Google? The only one I can think of might be Microsoft but they still refuse to break out AI revenue and expenses in the earnings reports. That isn't a good sign.
I certainly don't think that profit would be required. Many of the massive tech companies that exist today went through long periods where they focused on growth and brand, not profits, for many years even post-IPO.
I won't pretend to know exactly how the AI landscape will look in the future, but at this point it's pretty clear that there's going to be massive revenue going to the sector, and Moore's law will continue to crank.
I see what you're saying though. In particular, these first-generation gigawatt data centers might be black holes of an investment, considering that in the not-too-distant future AI compute will be fully commoditized and 10x cheaper.
Yeah, I think we're on the same page on this one.
"failed to skate where the puck was headed" assumes that we know where the puck is going to be. We don't.
Everyone is skating towards that same spot while Apple is over by the blue line practicing their swizzles. They sure look like they're doomed. But large groups of people have skated to the "wrong spot" thousands of times. That's the entire point that Gretzky was making with his quote. He's not big enough, strong enough, fast enough to get in that scrum. They're all fighting it out and the puck slides away. To him. All alone.
Maybe that is Apple, maybe it's not. I mean, they're still learning to skate while everyone else is playing hockey.
Their X/OpenGL support has also been in stasis for 10 years or more. There's not enough money in taking over for SGI to move their needle.
Don't abandon Intel Macs, then. Call them Mac AI systems with NVIDIA chips and sell them for more than the Apple Silicon Macs.
No one would buy slower, hotter computers for more money. Most people who own Apple computers today are extremely satisfied with Apple Silicon, and AI enthusiasts are an increasingly large slice of those people (since there really isn't anything else, and getting a 3090/4090/5090 is still hard and expensive).
Macs are basically a dead business. The key is somehow creating the AI equivalent of an App Store or something
Dead business?
The Mac is something like 30 billion in revenue per year, and 10 billion in profit.
The entire "generative AI" "industry" is struggling to reach 30 billion in revenue even with their creative accounting (my free Perplexity that comes with Revolut is somehow counted at full price, even though I never paid anything, and I'm sure Revolut doesn't pay full price), and gross profit is deep in the negative.