This is the main "killer feature" I've personally experienced from GPT tools: a much better contextual "search engine-ish" tool for combing through and correlating different internal data sources (Slack, wiki, Jira, GitHub branches, etc.).
AI code assistants have been a net neutral for me (they get enough idioms in C++ slightly incorrect that I have to spend a lot of time just reading the generated code thoroughly), but being able to say "tell me what the timeline for feature X is" and have it comb through a bunch of internal docs, tickets, git commit messages, etc., and give me a coherent answer with links is amazing.
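For anyone curious how that kind of tool is typically wired up: it's usually retrieval-augmented generation, i.e. embed the internal docs, pick the ones closest to the question, and hand them to the model as context along with their links. A rough sketch, assuming the OpenAI Python client; the model names, doc snippets, and URLs below are all made up:

    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Stand-ins for snippets pulled out of Slack, Jira, the wiki, etc.,
    # each tagged with a link back to its source.
    docs = [
        {"text": "JIRA FEAT-42: feature X slipped to Q3 pending security review",
         "url": "jira/FEAT-42"},
        {"text": "wiki: feature X design doc, approved and signed off",
         "url": "wiki/feature-x"},
        {"text": "slack #eng: feature X branch merged behind a flag",
         "url": "slack/C123"},
    ]

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    question = "What is the timeline for feature X?"
    doc_vecs = embed([d["text"] for d in docs])
    q_vec = embed([question])[0]

    # Cosine similarity against every snippet; keep the top two as context.
    scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    top = [docs[i] for i in scores.argsort()[::-1][:2]]

    context = "\n".join(f"[{d['url']}] {d['text']}" for d in top)
    answer = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content":
            f"Answer using only these sources and cite their links:\n{context}\n\nQ: {question}"}],
    ).choices[0].message.content
    print(answer)

The real products mostly differ in the connectors and in how they chunk and rank the sources; the core loop looks like this.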
This is partly why I believe the OS makers (Apple, Microsoft, Google) have a huge advantage in the future when it comes to LLMs.
They control the OS, so they can combine and feed all your digital information to an LLM in a seamless way. However, in the very long term, I think their advantage will go away because at some point LLMs could get so good that you don't need an OS like iOS anymore. An LLM could simply become standalone and function without a traditional OS.
Therefore, I think the advantage for iOS, Android, and Windows will increase over the next few years, but diminish after that.
An LLM is an application that runs on an operating system like any other application. That the vendor of the operating system has tied it to the operating system is purely a marketing/force-it-onto-your-device/force-it-in-front-of-your-face play. It's forced bundling, just like Microsoft did with Internet Explorer 20 years ago.
I predict that OpenAI will try to circumvent iOS and Android by making their own device. I think it will be similar to Rabbit R1, but not a scam, and a lot more capable.
They recently hired Jony Ive for a project; it could be this.
I think it'll be a long-term goal; maybe in 3-4 years a device similar to the Rabbit R1 would be viable. It's far too early right now.
Even if this is true (and I'm not saying it's not), they probably won't create their own OS. They'd be smarter to do what Apple did and clone a BSD (or similar) rather than start afresh.
Would be extremely surprising if it were anything other than an Android fork. The differentiator is gonna be the LLM, always-on listening, and the physical interface to it.
You're just burning money bothering to rewrite the rest of the stack when off-the-shelf software will save you years.
The LLM would become the OS.
An LLM cannot "become" an OS. It can have an OS added to it, for sure, but that's a different thing. LLMs run on top of a software stack that runs on top of an OS. Incorporating that whole stack into a single binary does not mean it "becomes" an OS.
And the point stands: you would not write a new OS, even to incorporate it into your LLM. You'd clone a BSD (or similar) and start there.
I don't think you're getting the main point. The only application that this physical device would run is ChatGPT (or some successor). You won't be able to install other apps on it like a normal OS. Everything you do is inside this LLM.
Underneath, it can be Linux, BSD, Unix, or nothing at all, whatever. It doesn't matter. That's not important.
OS was just a convenient phrase to describe this idea.
I got your main point from the first message, but still don't like redefining terminology like OS to mean what you did.
Think of iOS and everything that it does such as downloading apps, opening apps, etc. Replace all of that with ChatGPT.
No need to get to the technicals such as whether it's UNIX or Linux talking to the hardware.
Just from a pure user experience standpoint, OpenAI would become iOS.
I don't think "OS" means anything definitive. It's not 1960. Nowadays, it's a thousand separate things stuck together.
I think what you mean is "Desktop" not "OS". You're just replacing all the windows, menus and buttons with a chat interface.
The LLM can't abstract PCI, USB, SATA, etc. away by itself.
What counts as an OS is subjective. The concept has always been a growing snowball.
This is a similar situation to the view that the web would replace operating systems. All we'd need is a browser.
I don't think AI is ultimately even an application; it's a feature we will use in applications.
> This is a similar situation to the view that the web would replace operating systems. All we'd need is a browser.
Well, that's not a false statement. As much as I might dislike it, the rise of the web and web applications has made the OS itself irrelevant for a significant number of tasks.
I'm not even sure you can make a website that takes text input to an executable and dumps the output.
Even then, the LLM cannot possibly be a standalone OS. For one thing, it cannot execute loops, so even something as simple as enumerating hardware at startup is impossible.
Good comment. From Apple's point of view, AI could be a disruptive innovation: they've spent billions making extremely user-friendly interfaces, but that could become irrelevant if I can just ask my device questions.
But I think there will be a long period when people want both the traditional UI with buttons and sliders, and the AI that can do what you ask. (Analogy with phone keyboards where you can either speech-to-text, or slide to type, or type individual letters, or mix all three.)
I cannot tell you how much this echoes what people were saying during the dot-com days :) Of course back then it was browsers and not LLMs. Looking back, people were largely correct about this, yet we're still having the same conversation about replacing the OS cartel.
>they get enough idioms in C++ slightly incorrect
This is part of why I stay in Python when doing AI-assisted programming: there's so much training information out there for Python, and I _generally_ don't care if it's slightly off-idiom; it's still probably fine.
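As an illustration of what "off-idiom but fine" tends to look like (my example, not the parent's): an assistant will often write the first version below where an experienced Python programmer would write the second, and both are correct:

    # Off-idiom but correct: the kind of Python assistants often produce.
    def squares_v1(nums):
        result = []
        for i in range(len(nums)):
            result.append(nums[i] * nums[i])
        return result

    # Idiomatic equivalent: same behavior, just more Pythonic.
    def squares_v2(nums):
        return [n * n for n in nums]

    assert squares_v1([1, 2, 3]) == squares_v2([1, 2, 3]) == [1, 4, 9]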
Yea, I was thumbs-down on ai-assisted programming because when I tested it out, I tried it by adding things to my existing C and C++ projects, and its suggestions were... kind of wild. Then, a few months later I gave it another chance when I was writing some Python and was impressed. Finally, I used it on a new-from-blank-text-file Rust project and was pretty much blown away.
The best I have ever seen were obscure languages with very strong type safety. Some researcher at a sibling org to my own told me to try it with the Lean language, and it basically gave flawless suggestions.
I'm guessing this is because the only training material was blogs from uber-nerdy CS researchers on a language where "mistakes" are basically impossible to write, and not a bunch of people flailing on forums asking about hello world-ish stuff and segfaulting examples.
As someone who doesn't generally program, I found it pretty good at getting an init.lua set up for nvim with a bunch of plugins and some functions that would have taken me ages to do by hand. That said... it still took a day or two of working with it and troubleshooting everything, and while it's been reliable so far, I worry that it's not exactly idiomatic. I don't know enough to really say.
What it's really good at is taking my description of something and pointing me in the right direction to do my own research.
(Two things helped me get decent code. One was to describe the problem and desired solution, followed by a "Does that make sense?"; this seems to get it to restate the problem itself and produce better solutions. The other was to copy the output into a fresh session and ask for a description of what the code does and what improvements could be made.)
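For what it's worth, that two-pass trick is easy to script as well. A minimal sketch using the OpenAI Python client; the model name and prompts are placeholders, not the parent's exact workflow:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    MODEL = "gpt-4o"   # placeholder model name

    # Pass 1: describe the problem and desired solution, then ask
    # "Does that make sense?" so the model restates it before coding.
    draft = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": (
            "I need a function that deduplicates log lines while keeping "
            "their original order. Does that make sense? Restate the "
            "problem, then write the code."
        )}],
    ).choices[0].message.content

    # Pass 2: a fresh session with no shared history, so the review
    # isn't anchored to the conversation that produced the code.
    review = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": (
            "Describe what this code does and what improvements could "
            "be made:\n\n" + draft
        )}],
    ).choices[0].message.content
    print(review)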
The downside of this nvim solution is the same downside as both pasting big blobs of AI code into a repo and pasting big vim configs you find online into your vimrc: you can't explain the pasted code.
When you need something fast for whatever reason, sure. But later, when you want to tweak or add something, you'll have to finally sit down and learn basically the whole thing, or at least a major part of it, to do so anyway. Imo it's better to do that from the start, but that's not always feasible.
When I've used AI to write shell scripts, it used a lot of syntax that I couldn't understand. So I took the time to ask it to walk me through the parts I didn't understand. This took longer than blindly pasting what it generated, but still less time than it would have taken to learn to write my own script via search. With search, a lot of time is spent guessing the right search term. With chat, assuming it generated a reasonable answer (I know: a big assumption!), my follow-up questions can directly reference aspects of the generated code.
Having something explained to me has never helped me retain the information. That only happens if I spend the time actually figuring stuff out myself.
Not saying that it's a better way, but I started with vim by copying someone's conf (on GitHub), removing all the extraneous stuff, then slowly familiarizing myself with the rest. After that it was a matter of reading the docs when I wanted some configuration. I believe the first part is faster than dealing with an LLM, especially when dealing with unfamiliar software.
I agree with this approach generally, but I needed to use some lua plugins to do something specific fairly quickly, and didn't feel like messing around with it for weeks on end to get it just right.
My data science friend tells me it's really good at writing bad pandas code because it's seen so much bad pandas code.
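The canonical example of that (my illustration, not the friend's) is row-by-row iteration where a single vectorized expression would do. Both versions below work, which is exactly why the bad one is everywhere in the training data:

    import pandas as pd

    df = pd.DataFrame({"price": [10.0, 20.0, 30.0], "qty": [1, 2, 3]})

    # The pandas the training data is full of: iterating rows and
    # appending to a list instead of using a vectorized operation.
    totals = []
    for _, row in df.iterrows():
        totals.append(row["price"] * row["qty"])
    df["total"] = totals

    # Idiomatic version: one vectorized expression, faster and clearer.
    df["total"] = df["price"] * df["qty"]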
At the end of the day, it depends where you are in the hierarchy. Having it write code for me on a hobby project in react that's bad but works is one thing. I'm having a lot of fun with that. Having it write bad code for me professionally is another thing though. Either way, there's no going back to before ChatGPT, just like there's no going back to before Stack Overflow or Google. Or the Internet.
Wouldn't AI be worse at Rust than at C++ given the amount of code available in the respective languages?
Maybe this is a case where more training data isn't better. There is probably a lot of bad/old C++ out there in addition to new/modern C++, whereas Rust is comparatively all modern.
Yes, I think that's it. There is a lot of horrible C++ code out there, especially on StackOverflow where "this compiled for me" sometimes ends up being the accepted answer. There are also a lot of ways to use C++ poorly/wrong without even knowing it.
Companies are going to have to do a lot less gatekeeping and siloing of data for this to really work. The companies that are totally transparent even internally are few and far between in my experience.