"GPT‑5.4 interprets screenshots of a browser interface and interacts with UI elements through coordinate-based clicking to send emails and schedule a calendar event."

They show an example of 5.4 clicking around in Gmail to send an email.

I still think this is the wrong interface for interacting with the internet. Why not use the Gmail API? Then there's no need for screenshot interpretation or coordinate-based clicking.
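To make the contrast concrete, here's a rough sketch of the API route. This assumes the standard Gmail API flow (`users.messages.send` with a base64url-encoded MIME payload) and the `google-api-python-client` library for the final call; the addresses are made up, and building the payload itself is plain stdlib:

```python
# Sketch: sending an email via the Gmail API instead of UI clicking.
# Building the payload is stdlib-only; the actual send requires an
# OAuth-authenticated service object (google-api-python-client).
import base64
from email.message import EmailMessage

def build_raw_message(sender: str, to: str, subject: str, body: str) -> dict:
    """Build the base64url-encoded payload Gmail's users.messages.send expects."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = to
    msg["Subject"] = subject
    msg.set_content(body)
    raw = base64.urlsafe_b64encode(msg.as_bytes()).decode()
    return {"raw": raw}

payload = build_raw_message("me@example.com", "you@example.com",
                            "Hello", "Sent via API, no screenshots needed.")

# With an authenticated Gmail service object, the send is one call:
# service.users().messages().send(userId="me", body=payload).execute()
```

No pixels, no coordinates: the whole interaction is one structured request.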

The vast majority of websites you visit don't have usable APIs, and discovery of the APIs that do exist is very poor.

Screenshots, on the other hand, are documentation, API, and discovery all in one. And you'd be surprised how little context/tokens screenshots consume compared to all the verbose back-and-forth JSON payloads of APIs.

>The vast majority of websites you visit don't have usable APIs, and discovery of the APIs that do exist is very poor.

I think an important point here is that a lot of websites/platforms don't want AIs to have direct API access, because they're afraid an AI would take the customer "away" from them, making the consumer a customer of the AI rather than of the website/platform. So for AIs to do what customers want them to do, their browsing needs to look just like the customer's own browsing/browser.

That's true, and it's always been like that, which is why the suggestion that AI should be using APIs is already dead in the water. As for gating websites to humans by not providing APIs, that era is quickly coming to a close.

There's also the fact that they don't want automated abuse. At this point a lot of services might just go app-only so they can have a verified compute environment that is difficult to bot.

It feels like building humanoid robots so they can use tools built for human hands. Not clear if it will pay off, but if it does then you get a bunch of flexibility across any task "for free".

Of course APIs and CLIs also exist, but they don't necessarily have feature parity, so more development would be needed. Maybe that's the future though since code generation is so good - use AI to build scaffolding for agent interaction into every product.

I don't see how an API couldn't have full parity with a web interface; the API is how you actually trigger a state transition in the vast majority of cases.

Lots of services have no desire to ever expose an API. This approach lets you step right over that.

If an API is exposed you can just have the LLM write something against that.

A model that gets good at computer use can be plugged in anywhere you have a human. A model that gets good at API use cannot. From the standpoint of diffusion into the economy/labor market, computer use is much higher value.

Same reason why Wikipedia deals with so many people scraping its web page instead of using their API:

Optimizations are secondary to convenience

I think the desire is that in the long-term AI should be able to use any human-made application to accomplish equivalent tasks. This email demo is proof that this capability is a high priority.

A world where AIs use APIs instead of UIs to do everything is a world where us humans will soon be helpless, as we'll have to ask the AIs to do everything for us and will have limited ability to observe and understand their work. I prefer that the AIs continue to use human-accessible tools, even if that's less efficient for them. As the price of intelligence trends toward zero, efficiency becomes relatively less important.

This opens up a new question: how does bot detection work when the bot is using the computer via a gui?

On its face, I'm not sure that's a new question. Bots using browser automation frameworks (Puppeteer, Selenium, Playwright, etc.) have been around for a while. Bot detection tools rely on signals like cursor movement speed, accuracy, keyboard timing, etc. How those detection tools might update to support legitimate bot users does seem like an open question to me, though.
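As a toy illustration of the timing signals mentioned above (not a real detection product, and real systems combine many more signals): scripted input often shows implausibly uniform inter-keystroke intervals, while human typing jitters. The threshold and sample data below are made up for the sketch:

```python
# Illustrative heuristic only: flag keystroke-interval sequences whose
# variability is implausibly low for a human typist.
from statistics import pstdev

def looks_scripted(intervals_ms: list, min_jitter_ms: float = 5.0) -> bool:
    """Return True if the intervals are suspiciously uniform."""
    if len(intervals_ms) < 5:
        return False  # too little data to judge
    return pstdev(intervals_ms) < min_jitter_ms

human = [82, 130, 95, 211, 74, 160, 101]   # jittery, irregular
bot   = [50, 50, 51, 50, 50, 49, 50]       # metronomic

print(looks_scripted(human))  # False
print(looks_scripted(bot))    # True
```

The open question stands either way: an AI driving a GUI on a user's behalf may deliberately mimic human-like jitter, at which point timing heuristics like this stop discriminating.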

APIs have never been a gift but rather have always been a take-away that lets you do less than you can with the web interface. It’s always been about drinking through a straw, paying NASA prices, and being limited in everything you can do.

But people are intimidated by the complexity of writing web crawlers, because management has been so traumatized by the cost of making GUI applications that they couldn't believe how cheap it is to write crawlers and scrapers... until LLMs came along, changed the perceived economics, and created a permission structure. [1]

AI is a threat to the “enshittification economy” because it lets us route around it.

[1] That high cost of GUI development is one reason why scrapers are cheap... there is a good chance that the scraper you wrote 8 years ago still works, because (a) they can't afford to change their site, and (b) even if they could, changing anything substantial about it is likely to unrecoverably tank their Google rankings, so they won't. AI might change the mechanics of that, now that your Google traffic is likely to go to zero no matter what you do.

You can buy a Claude Code subscription for $200 and use way more tokens in Claude Code than if you pay for direct API usage. Anthropic decided you can't take your Claude Code auth key and use it to hit the API via a different tool. They made that business decision because they thought it was strategically better for them. They're allowed to make that choice as a business.

Plenty of companies make the same choice about their API, they provide it for a specific purpose but they have good business reasons they want you using the website. Plenty of people write webcrawlers and it's been a cat and mouse game for decades for websites to block them.

This will just be one more step in that cat and mouse game, and if the AI really gets good enough to become a complete intermediary between you and the website? The website will just shut down. We saw it happen before with the open web. These websites aren't here for some heroic purpose; if you screw their business model they will just go out of business. You won't be able to use their website because it won't exist, and the websites that do exist will either (a) be made by the same guys writing your agent, or (b) be highly, highly optimized to get your agent to screw you.

> AI is a threat to the “enshittification economy” because it lets us route around it.

This is prescient -- I wonder if the Big Tech entities see it this way. Maybe, even if they do, they're 100% committed to speedrunning the current late-stage-cap wave, and therefore unable to do anything about it.

They are not a single thing.

Google has a good model in the form of Gemini and they might figure they can win the AI race and if the web dies, the web dies. YouTube will still stick around.

Facebook is not going to win the AI race with low-IQ Llama, but Zuck believed their business was cooked around the time it became a real business, because their users would eventually age out and get tired of it. If I were him I'd be investing in anything that isn't cybernetic, be it gold bars or MMA studios.

Microsoft? They bought Activision for $69 billion. I just can't explain their behavior rationally but they could do worse than their strategy of "put ChatGPT in front of laggards and hope that some of them rise to the challenge and become slop producers."

Amazon is really a bricks-and-mortar play which has the freedom to invest in bricks-and-mortar because investors don't think they are a bricks-and-mortar play.

Netflix? They're cooked, as is all of Hollywood. Hollywood's gatekeeping-industrial strategy of producing as few franchises as possible will crack someday, and our media market may wind up looking more like Japan's, where somebody can write a low-rent light novel like

https://en.wikipedia.org/wiki/Backstabbed_in_a_Backwater_Dun...

and J.C. Staff makes a terrible anime that convinces 20k Otaku to drop $150 on the light novels and another $150 on the manga (sorry, no way you can make a balanced game based on that premise!) and the cost structure is such that it is profitable.

> AI is a threat to the “enshittification economy” because it lets us route around it.

I am not sure about that. We techies avoid enshittification because we recognize shit. Normies will just get their sycophantic, enshittified AI that will tell them to keep buying into walled gardens.

Lowest common denominator.

Because the web, and software more generally, is full of things that aren't APIs, and you do in fact need the clicking to work to make agents work generally.

Not everything has an API, or API use is limited. Some UIs are more feature-complete than their APIs.

Some sites try to block programmatic use.

UI use can be recorded and audited by a non-technical person.

The ideal of REST: the HTML/UI is the API.
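In that spirit, here's a stdlib-only sketch of treating a page's `<form>` as a machine-readable endpoint description, the way a hypermedia (HATEOAS) client would. The HTML below is an invented example page, not any real site:

```python
# Sketch: read a form's action, method, and input names straight from HTML,
# so the UI markup itself tells an agent how to call the endpoint.
from html.parser import HTMLParser

class FormReader(HTMLParser):
    """Collect the first form's action, method, and named input fields."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.method = "get"
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "form":
            self.action = a.get("action")
            self.method = a.get("method", "get").lower()
        elif tag == "input" and "name" in a:
            self.fields[a["name"]] = a.get("value", "")

page = """
<form action="/send" method="post">
  <input name="to" value="">
  <input name="subject" value="">
</form>
"""

reader = FormReader()
reader.feed(page)
# The agent now knows the endpoint (POST /send) and its parameters
# (to, subject) from the UI markup alone -- no separate API docs needed.
print(reader.action, reader.method, sorted(reader.fields))
```

That's the REST ideal in miniature: the representation you render for humans doubles as the contract a program follows.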

I guess a big chunk of their target market won't know how to use APIs.

One could argue that LLMs learning programming languages made for humans (i.e. most of them) is using the wrong interface as well. Why not use machine code?

Why would human language be the wrong interface when they're literally language models? And why would machine code be better when there is probably orders of magnitude less training material written in machine code?

You can also test this yourself easily: fire up two agents, ask one to use a PL meant for humans and the other to write straight-up machine code (or even assembly), and see which results you like best.

> One could argue that LLMs learning programming languages made for humans (i.e. most of them) is using the wrong interface as well.

Then go ahead and make an argument. "Why not do X?" is not an argument, it's a suggestion.

Because they are inherently text-based, as is code?

But they are abstractions made to cater to human weaknesses.