I thank Tim Cook for this information. Till today I did not know the extent of Apple's commitment to or interest in doing frontier AI research.
I was leaning towards buying a Mac, but now I won't because I do what (little) I can to slow down AI.
Switching to Windows would also clearly be encouraging the AI juggernaut, so I will stay with Linux.
I understand your sentiment, but AI is the new internet -- despite the hype it's not going away.
The ability to have a true personal AI agent that you would own would be quite empowering. Out of all the industry players, I'd put Apple as the least bad option to have that happen with.
>Out of all the industry players I'd put Apple as the least bad option
To be the least bad option, Apple would need to publish a plan for keeping an AI under control so that it stays under control even if it undergoes a sharp increase in cognitive capability (e.g., during training) or alternatively a plan to prevent an AI's capability from ever rising to a level that requires the aforementioned control.
I haven't seen anything out of Apple suggesting that Apple's leaders understand that a plan of the first kind or the second kind is necessary.
Most people who have written about the topic in detail put Anthropic as the least bad option because out of all the groups with competitive offerings, their leadership has written in the most detail about the need for a plan and about their particular (completely inadequate IMHO) plan.
I myself put Google as the least bad option -- the slightly less awful option, to be precise -- with large uncertainty, because Google wasn't pushing capabilities hard till OpenAI and Anthropic put it in a situation in which it either had to start pushing hard or risk falling so far behind that it wouldn't be able to catch up. In particular, Google risked finding itself unable to create a competitive offering because it didn't have access to enough data collected from users of LLMs and generative AIs, and couldn't get enough data because it couldn't attract users. While it was the leading lab, Google was proceeding slowly, and at least one industry insider claims credibly that the slowness was deliberately chosen to reduce the probability of an AI catastrophe. Consequently, I use Gemini as my LLM service.
I must stress that no one has an adequate plan for avoiding an AI catastrophe while continuing to push capabilities, and IMHO no one is likely to devise one in time, so it would be great if no one did any more frontier AI research at all till humanity itself becomes more cognitively capable.
Of the companies mentioned, I believe Apple is the only one that does not provide its own chatbot. If they aren't opening an interface for open-ended interaction with their AI tools, I think your concern is much less relevant. I'm curious if you'd disagree though.
Siri is not, AFAIK, a competitive offering in the chatbot space, but it is a chatbot, is it not? I guess I just don't understand your argument.
Maybe this will help: AI labs have tried out 100s of different designs for AIs, and if they aren't stopped (e.g., by the governments of the developed world) they are going to try out 1000s of additional designs. Most of us who worry about AI takeover or about human extinction caused by AI do not claim to be able to tell which design will be the first design capable of taking over or of extincting humanity -- even if we had complete access to the source code and the training data and could ask the researchers behind the design questions. (The researchers do not know either, IMHO.)

But once a design has been widely deployed for many months, we know that that design is not the one that is going to take over. Gemini 2.5, for example, has been widely deployed since January. It has been given plentiful access to very gullible people, to very desperate people, and to plentiful compute resources. (When a customer asks an AI to write code, then runs that code without first understanding the code himself, he is giving the AI access to whatever compute resource the code gets run on.) If Gemini 2.5 were able to take over the world, it would have done so already. Ergo, I consider it morally permissible for me to use Gemini 2.5.

Now Google is not going to stop with Gemini 2.5: it will continue to try out different designs, which is why I consider it my obligation to avoid helping Google, e.g., by giving it money, which is why so far I've stayed on the free tier of Gemini.
Gemini 2.5 is not very agenty: it does not learn continuously, and it is extremely unlikely that it can work effectively towards any long-range plan or devise a plan that can withstand determined human opposition. So in the particular case of Gemini 2.5 and its competitors, we really didn't need many months of wide deployment to go by before concluding with high certainty that Gemini 2.5 is incapable of taking over the world. But most AI researchers and most leaders in AI labs regard the current crop of deployed AIs' inability to learn continuously and to formulate and work towards long-range plans as deficiencies to be overcome.
Your phone is probably an Android or an iPhone; it's funding AI research with your hard-earned money! Better smash it with a rock and eat the pieces.
I've never owned an Android phone or an iPhone.
You might enjoy the Aussie saying “pissing into the wind”.
It is pissing in the wind, but at least I'm not contributing to the catastrophic outcome by cooperating or doing business with Apple.
It's not my fault that the reality in which humanity finds itself turned out to be more dangerous than almost anyone suspected. My only moral obligation is to do what I can to make the future turn out okay, even though what I can do is very, very little.
> because I do what (little) I can to slow down AI.
I think you're focusing on the wrong things. AI can be used in harmful ways, but not because it's outsmarting human beings, despite all the cult-like hype. In fact, it doesn't need to be actually competent for the rich to take advantage of the tech in destructive ways. They just need to convince the public that it's competent enough so that they have an excuse to cut jobs. Even if AI does a poorer job, it won't matter if consumers don't have alternatives, which is unfortunately the case in many situations. We face a much bigger threat from data breaches caused by vibe-coded apps than from conscious robots manipulating humans through the Matrix.
Just look at Google support. It's a bunch of mindless robots that can kick you out of their platform on a whim. Their "dispute process" is another robot that passive-aggressively ragebaits you. [1][2] They're incompetent, yet it helps one of the richest companies in the world save money.
Also, let's not forget that Google's AI flagged multiple desperate parents who were sharing medical pics of their kids with their doctors. Only when the media contacted them did a human being come out, only to falsely accuse the parents of being pedos. [3] People were harmed, and not because of competency.
An even greater concern is the ability of LLMs to mass-produce spam or troll content with minimal effort. It's a major threat to democracies all around the globe, and it turns out we don't need a superintelligence for demagogues to misuse it and cause harm.
There are real concerns regarding AI beyond the perpetually "just around the corner" superintelligence. What we need is a push for stronger regulatory protection for workers, consumers, and constituents, not a boycott of MacBooks because of AI.
[1]: https://news.ycombinator.com/item?id=26061935
[2]: https://news.ycombinator.com/item?id=23219427
[3]: https://news.ycombinator.com/item?id=32538805