This is actually a huge deal.
As someone building AI SaaS products, I used to take the position that directly integrating with APIs would get us most of the way there in terms of complete AI automation.
I wanted to take a stab at this problem and started researching some everyday businesses and how they use software.
My brother-in-law (who is a doctor) showed me the bespoke software they use in his practice. Running on Windows. Using MFC forms.
My accountant showed me Cantax - a very powerful software package they use to prepare tax returns in Canada. Also on Windows.
I started to realize that most of the real world runs on software that directly interfaces with people, without clearly defined public APIs you can integrate with. Being in the SaaS space makes you believe that everyone ought to have client-server backend APIs, etc.
Boy was I wrong.
I am glad they did this, since it is a powerful connector to these types of real-world business use cases that are super-hairy, and hence very worthwhile in automating.
This has existed for a long time, it's called "RPA" or Robotic Process Automation. The biggest incumbent in this space is UiPath, but there are a host of startups and large companies alike that are tackling it.
Most of the things that RPA is used for can be easily scripted, e.g. download a form from one website, open up Adobe. There are a lot of startups that are trying to build agentic versions of RPA, I'm glad to see Anthropic is investing in it now too.
RPA has been a huge pain to work with.
It's almost always a framework around existing tools like Selenium that you constantly have to fight against to get good results from. I was always left with the feeling that I could build something better myself by hand-rolling the scripts rather than using their frameworks.
Getting Claude integrated into the space is going to be a game changer.
Most RPA work is in dealing with errors and exceptions, not the "happy path". I don't see how Claude's Screen Agent is going to work out there - what do you do when an error pops up and you need to implement specific business logic for how to respond? What about consistency over many executions, and enterprise accounts? You want a centralized way to control agent behavior. Script-based RPA is also much faster and cheaper to run, and more consistent.
Maybe Anthropic should focus on building flexible RPA primitives we could use to build RPA workflows, such as extracting values from components that need scrolling, selecting values from long drop-down menus, or handling error messages under form fields.
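To make "primitives" concrete, here is a minimal sketch of the kind of building blocks I mean, written against Selenium (which is what most RPA frameworks already wrap); the URL, selectors, and the .error-message class are made up for illustration:

    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import Select

    driver = webdriver.Chrome()
    driver.get("https://example.com/claim-form")  # placeholder target app

    def read_after_scrolling(css):
        # Extract a value from a component that has to be scrolled into view first.
        el = driver.find_element(By.CSS_SELECTOR, css)
        driver.execute_script("arguments[0].scrollIntoView({block: 'center'});", el)
        return el.text

    def pick_from_long_dropdown(css, visible_text):
        # Select a value from a long drop-down without scrolling it by hand.
        Select(driver.find_element(By.CSS_SELECTOR, css)).select_by_visible_text(visible_text)

    def error_under_field(field_css):
        # Return the validation message rendered under a form field, if any.
        hits = driver.find_elements(By.CSS_SELECTOR, field_css + " + .error-message")
        return hits[0].text if hits else None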
I agree with your post.
Isn't this most programming? I always chuckle when a junior hire looks at my code and says: "It is mostly error checking."
100% this. I am using the open source Ui.vision to automate some business tasks. Works well, but only 10% of the work is for automating the main workflow; 90% of the work goes into error and edge case handling (e.g. internet down, website (to scrape data from) down, some input data has typos or the wrong date format, etc.).
A human can work around all these error cases once she encounters them. Current RPA tools like UiPath or Ui.vision need explicit programming for every potential situation. And I see no indication that Claude is doing any better than this.
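To make the 90/10 split concrete, here is a rough Python sketch of the defensive scaffolding a "simple" scraping step ends up needing; the retry policy and date formats are illustrative, not taken from any particular tool:

    import time
    from datetime import datetime
    import requests

    def fetch_with_retries(url, attempts=3, backoff=30):
        # Handle "internet down" and "website down" without a human babysitter.
        last_error = None
        for i in range(attempts):
            try:
                resp = requests.get(url, timeout=10)
                resp.raise_for_status()
                return resp.text
            except requests.ConnectionError as e:   # no network at all
                last_error = e
                time.sleep(backoff * (i + 1))
            except requests.HTTPError as e:         # site up but unhappy
                last_error = e
                if resp.status_code < 500:
                    raise                            # 4xx: data problem, escalate to a human
                time.sleep(backoff)
        raise RuntimeError(f"gave up fetching {url}") from last_error

    def parse_date(value):
        # Input data with typos or the wrong date format: try the usual suspects,
        # then escalate instead of guessing.
        for fmt in ("%Y-%m-%d", "%d/%m/%Y", "%m/%d/%Y"):
            try:
                return datetime.strptime(value.strip(), fmt)
            except ValueError:
                continue
        raise ValueError(f"unrecognised date: {value!r}")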
For starters, for visual automation to work reliably the OCR quality needs to improve further and be 100% reliable. Even in that very basic "AI" area, Claude, ChatGPT, Gemini are good, but not good enough yet.
I can see it now, Claude generating expect scripts. 1994 and 2024 will be fully joined.
The big thing I expect at the next level is in using Claude to first generate UI-based automation based on an end user's instructions, then automatically defining a suite of end-to-end tests, confirming with the user "is this how it should work?", and then finally using this suite to reimplement the flow from first principles.
I know we're still a bit far from there, but I don't see a particular hurdle that strikes me as requiring novel research.
But does it do any better at soliciting the surprise requirements from the user, who, after confirming that everything works, two months later reports a production bug because the software isn't correctly performing the different requirements on the first Tuesday of each quarter that you never knew about?
I once had an executive ask to start an incident because he was showing a client the app and a feature that he wanted that had never been spec’d didn’t exist.
So basically, Tog's Paradox in action?
I was going to comment about this. Worked at a place that had a “Robotics Department”, wow I thought. Only to find out it was automating arcane software.
UI is now much more accessible as an API. I hope we don't start seeing captcha-like behaviour in desktop or web software.
Wow, that's a grim potential future. I can already see software producers saying that e.g. the default license only allows operation of our CAD designer software by a human operator. If you want to make your bot use it in an automated way, you must buy the bot license, which costs 10x more.
Exactly. I have been wondering for a while how GenAI might upend RPA providers; I guess this might be the answer.
I've been wondering the same and started exploring building a startup around this idea. My analysis led me to the conclusion that if AI gets even just 2 orders of magnitude better over the next two years, this will be "easy" and considered table stakes. Like connecting to the internet, syncing with the cloud, or using printer drivers.
I don't think there will be a very big place for standalone next-gen RPA pure plays. It makes sense that companies that are trying to deliver value would implement capabilities like this. Over time, I expect some conventions/specs will emerge. Either Apple/Google or Anthropic/OpenAI are likely to come up with an implementation that everyone aligns on.
In other words, I agree
> if AI gets even just 2 orders of magnitude better over the next two years
You realize this means '100 times better', right?
Yes, thanks for pointing out the assumption here. I'm not sure how to quantify AI improvements, and tbh I'm not really up to speed on the quantifiable rate of improvement from 4 to 4o to o1.
100 times better seems to me in line with the bet that's justifying $250B per annum in capex (just among hyperscalers), but I'm curious how you might project a few years out?
Having said that, my use of 100x better here applies to being 100x more effective at navigating use cases not in the training set, for example, as opposed to doing things that are 100x more awesome or doing them 100x more efficiently (though cost, context window, and tokens per unit of electricity all seem to continue to improve quickly).
I would think that such an increase in AI capability would basically be AGI...
Just to give a few comparisons, the following things are two orders of magnitude apart:
1. The force felt by a mosquito landing on your arm and getting punched by Mike Tyson in his prime
2. A firecracker exploding and a stick of dynamite exploding
3. The heat from a candle and the heat from a blowtorch
4. The sound from a whisper and the sound from a jet engine
UiPath hasn't been able to figure out how to make a profitable business since 2005, and we are nearing the end of this hype cycle. I am not so sure this will lead anywhere. I am a former investor in UiPath.
Attempts at commercialization in technology seem to often happen twice. First we get the much-hyped failure, and only later we get the actual thing that was promised.
So many examples come to mind… RealVideo -> YouTube, Myspace -> Facebook, Laserdisc -> DVD, MP3 players -> iPod…
UiPath may end up being the burned pancake, but the underlying problem they’re trying to address is immensely lucrative and possibly solvable (hey if we got the Turing test solved so quickly, I’m willing to believe anything is possible).
I love the “burned pancake” euphemism. Totally going to borrow this.
It didn’t help that UiPath forced a subscription model and “cloud orchestrator” on all users, many of whom needed neither. They got greedy. We ditched it.
My impression is that actually solving this classic RPA problem with AI is exactly the raison d'etre of AI21Labs with their task specific models[1]. They don't have the biggest or best general purpose LLM, but they have an excellent model that's been pre-trained on specific types of business data and also made available for developers using simple APIs & "RPA-style" interfaces.
[1] https://www.ai21.com/use-cases
Honestly, this is going to be huge for healthcare. There's an incredible amount of waste due to incumbent tech making interoperability difficult.
Hopefully.
I’ve implemented quite a few RPA apps and the struggle is the request/response turnaround time for realtime transactions. For batch data extraction or input, RPA is great, since there’s no expectation of process duration. However, when a client requests data in realtime that can only be retrieved from an app using RPA, the response time is abysmal. Just picture it: start the app, log into the app if it requires authentication (hope that the authentication's MFA is email-based rather than token-based, and then access the mailbox using an in-place configuration with MS Graph/Google Workspace/etc.), navigate to the app’s view that has the data or, worse, bring up a search interface since the exact data isn’t known and try to find the requested data. So brittle...
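To illustrate the problem, here is a toy Python sketch of the synchronous chain a single "realtime" lookup has to pay for; the step functions are stubs and the sleeps are placeholders for what are, in practice, tens of seconds each:

    import time

    def timed(label):
        # Decorator that prints how long each RPA step takes.
        def wrap(fn):
            def inner(*args, **kwargs):
                start = time.monotonic()
                result = fn(*args, **kwargs)
                print(f"{label}: {time.monotonic() - start:.1f}s")
                return result
            return inner
        return wrap

    @timed("launch app")
    def launch_app():
        time.sleep(1)  # stand-in for a fat-client cold start

    @timed("authenticate (incl. MFA via mailbox)")
    def authenticate():
        time.sleep(1)  # stand-in for login plus polling MS Graph/Workspace for the code

    @timed("navigate to view")
    def navigate_to_view():
        time.sleep(1)  # stand-in for menu clicks and screen loads

    @timed("search for record")
    def search_record(query):
        time.sleep(1)  # stand-in for driving the search UI
        return {"query": query}

    launch_app(); authenticate(); navigate_to_view()
    print(search_record("invoice 12345"))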
It is.
CTO of healthcare org here.
I just put a hold on a new RPA project to keep an eye on this and see how it develops.
According to their docs, Anthropic will sign a BAA.
Out of curiosity, how are high-risk liability environments like yours coming to terms with the non-deterministic nature of models like these? E.g. the non-zero chance that it might click a button it *really* shouldn't, as demonstrated in the failure demo.
Technical director at another company here: We have humans double-check everything, because we're required by law to. We use automation to make response times faster, or to do the bulk of the work and then just have humans double-check the AI. To do otherwise would be classed as "a software medical device", which needs documentation out the wazoo, and for good reason. I'm not sure you could even have a medical device where most of your design doc is "well I just hope it does the right thing, I guess?".
Sometimes, the AI is more accurate or safer than humans, but it still reads better to say "we always have humans in the loop". In those cases, we reap the benefits of both: Use the AI for safety, but still have a human fallback.
I'm curious, what does your human verification process look like? Does it involve a separate interface or a generated report of some kind? I'm currently working on a tool for personal use that records actions and triggers them at a later stage when a specified event occurs. For verification, I generate a CSV report after the process is complete and back it up with screen recordings.
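For what it's worth, my CSV report is nothing fancy; roughly this, with the column set just being what I happen to find useful:

    import csv
    from datetime import datetime, timezone

    def append_audit_row(path, step, target, outcome, screenshot_file):
        # One row per automated action, so a human can review the run afterwards
        # and jump to the matching spot in the screen recording.
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([
                datetime.now(timezone.utc).isoformat(),
                step,             # e.g. "fill date-of-birth field"
                target,           # which field/button was acted on
                outcome,          # "ok" / "skipped" / "error: ..."
                screenshot_file,  # ties the row back to the recording
            ])

    append_audit_row("run_audit.csv", "submit form", "#save-button", "ok", "frame_00412.png")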
It's a separate interface where the output of the LLM is rated for safety, and anything unsafe opens a ticket to be acted upon by the medical professionals.
I don't know yet. We may not do it.
We haven't deployed a model like this, it's new.
I've done a ton of various RPAs over the years, using all the normal techniques, and they're always brittle and sensitive to minor updates.
For this, I'm taking a "wait and see" approach. I want to see and test how well it performs in the real world before I deploy it, and wait for it to come out of beta so Anthropic will sign a BAA.
The demo is impressive enough that I want to give the tech a chance to mature before my team and I invest a ton of time into a more traditional RPA.
At a minimum, if we do end up using it, we'll have solid guard rails in place - it'll run on an isolated VM, all of its user access will be restricted to "read only" for external systems, and any content that comes from it will go through review by our nurses.
AWS Bedrock deployed models, which include Anthropic Claude models, claim HIPAA compliance eligibility.
What is a BAA?
A Business Associate Agreement: https://www.techtarget.com/healthtechsecurity/feature/What-I... It's the agreement that lets a business associate handle HIPAA-protected data.
Healthcare has the extra complication of HIPAA / equivalent local laws, and institutions being extremely unwilling to process patient data on devices they don't directly control.
I don't think this is going to work in that industry until local models get good enough to do it, and small enough to be affordable to hospitals.
Hospitals use O365, there are HIPAA-compliant editions of any prominent cloud service.
That industry only thinks it controls its devices. Crowdstrike showed there are many bridges over that moat.
Their concern is compliance, not security.
Based on Tog's paradox (https://news.ycombinator.com/item?id=41913437) the moment this becomes easy, it will become hard again with extra regulation and oversight and documentation etc.
Similarly I expect that once processing/searching laws/legal records becomes easy through LLMs, we'll compensate by having orders of magnitude more laws, perhaps themselves generated in part by LLMs.
> There's an incredible amount of waste due to incumbent tech making interoperability difficult.
So the solution to that is to add another layer of complex AI tech on top of it?
Well nothing else we've tried has worked.
I work with healthcare in the UK. There’s a promising approach called CSV files which is revolutionising some of my workflows :)
We’ll see. Having worked in this space in the past, the technical challenges can be overcome today with no new technology: it's a business, sales, and regulation challenge more than a tech one.
Sometimes.
In my case I have a bunch of nurses that waste a huge amount of time dealing with clerical work and tech hoops, rather than operating at the top of their license.
Traditional RPAs are tough when you're dealing with VPNs, 2fa, remote desktop (in multiple ways), a variety of EHRs and scraping clinical documentation from poorly structured clinical notes or PDFs.
This technology looks like it could be a game changer for our organization.
True, 2FA and all these little details that exist now have made this automation quite insanely complicated. It is of course necessary that we have 2FA etc, but there is huge potential in solving this I believe.
From a security standpoint, what's considered the "proper" way of assigning a bot access based on a person's 2FA? Would that be some sort of limited scope expiring token like GitHub's fine-grained personal access tokens?
Security isn't the only issue here. There are more and less "proper" ways of giving bots access to a system. But the whole field of RPA exists in large part because the vendors don't want you to access the system this way. They aren't going to give you a "proper" way of assigning bot access in a secure way, because they explicitly don't want you to do it in the first place.
I don't know, I feel like it has to be some sort of near-field identity proof. E.g. as long as you are wearing a piece of equipment near a physical computer, it can run all those automations for you, or similar. I haven't fully thought through what the best solution could be or whether someone is already working on it, but I feel like there has to be something like that, which would allow you better UX in terms of access, but security at the same time.
So maybe like an automated YubiKey that you can opt in to a nearby computer to have all the access. Especially if working from home, you could set it to a state where, if you are within a 15 m radius of your laptop, it is able to sign all access.
Because right now, considering the number of tools I use, along with single sign-on, VPN, Okta, etc., and how slow they all seem to be, it's an extremely frustrating process constantly logging in everywhere, and it almost makes me procrastinate my work because I can't be bothered. Everything about those weird little things is an absolutely terrible experience, including things like cookie banners as well.
And it is ridiculous, because I'm working from home, but frustratingly high amount of time is spent on this bs.
A bluetooth wearable or similar to prove that I'm nearby essentially, to me that seems like it could alleviate a lot of safety concerns, while providing amazing dev/ux.
That's a really cool idea.
The main attack vector would then probably be some man-in-the-middle intercepting the signal from your wearable, which leads me to wonder whether you could protect yourself by having the responses valid for only an extremely short duration, e.g. ~1ms, such that there's no way for an attacker to do anything with the token unless they gain control over compute inside your house.
Maybe we could build an authenticator as part of the RPA tool or bot client itself. This way, the bot could generate time-based one-time passwords (TOTPs).
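A minimal sketch of that idea in Python using the pyotp library (assuming the bot is enrolled with the same shared secret a human would scan into their authenticator app):

    import pyotp

    # Example base32 secret for illustration only; in practice this would be
    # provisioned during enrollment and stored in a secrets manager.
    totp = pyotp.TOTP("JBSWY3DPEHPK3PXP")
    code = totp.now()  # six-digit code, valid for the current 30-second window
    print(code)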
Precisely why I built therapedia.io
I agree that at the business contract level, it's more about sales and regulations than tech. But in my experience working close to minimum wage white-collar jobs, about 1 in 4 of my coworkers had automated most of their job with some unholy combination of VBScript, Excel wizardry, AutoHotKey, Selenium, and just a bit of basic Python sprinkled in; IT, security, and privacy concerns notwithstanding. Some were even dedicated enough to pay small amounts out-of-pocket for certain tools.
I'd bet that until we get the risks whittled down enough for larger organizations to adopt this on a wide scale, the biggest user group for AI automation tools will be at the level of individual workers who are eager to streamline their own tasks and aren't paid enough to care about those same risks.
Or you'll start getting a captcha while trying to pump insulin
(Shrug) AI is now better at CAPTCHAs than I am, so bring it on I guess.
Is "AI SaaS bro discovers not everything has a JSON API" the new "startup bro just reinvented a bus"?
Good one.
> Being in the SaaS space makes you believe that everyone ought to have client-server backend APIs etc.
FWIW, looking at it from end-user perspective, it ain't much different than the Windows apps. APIs are not interoperability - they tend to be tightly-controlled channels, access gated by the vendor and provided through contracts.
In a way, it's easier to make an API to a legacy native desktop app than it is to a typical SaaS[0] - the native app gets updated infrequently, and isn't running in an obstinate sandbox. The older the app, the better - it's more likely to rely on OS APIs and practices, designed with collaboration and accessibility in mind. E.g. in Windows land, in many cases you don't need OCR and mouse emulation - you just need to enumerate the window handles, walk the tree structure looking for text or IDs you care about, and send targeted messages to those components.
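A minimal sketch of that approach using pywin32 (the window title below is a placeholder): enumerate top-level windows, walk the child controls, and send a message straight to a button, with no OCR or mouse emulation involved.

    import win32con
    import win32gui

    def find_window(title_fragment):
        # Enumerate top-level windows and return the first whose title matches.
        matches = []
        def on_window(hwnd, _):
            if title_fragment.lower() in win32gui.GetWindowText(hwnd).lower():
                matches.append(hwnd)
        win32gui.EnumWindows(on_window, None)
        return matches[0] if matches else None

    def dump_controls(hwnd):
        # Walk the child-control tree and print class name and text for each control.
        def on_child(child, _):
            cls = win32gui.GetClassName(child)
            text = win32gui.GetWindowText(child)  # some edit controls need WM_GETTEXT instead
            print(hex(child), cls, repr(text))
        win32gui.EnumChildWindows(hwnd, on_child, None)

    def click_button(hwnd):
        # Send a click message directly to a button control; no cursor movement needed.
        win32gui.SendMessage(hwnd, win32con.BM_CLICK, 0, 0)

    main = find_window("Cantax")  # placeholder: the tax package mentioned upthread
    if main:
        dump_controls(main)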
Unfortunately, desktop apps are headed the same direction web apps are (increasingly often, they are web apps in disguise), so I agree that AI-level RPA is a huge deal.
--
[0] - This is changing a bit in that frameworks seem to be getting complex enough that SaaS vendors often have no clue as to what kind of access they're leaving open to people who know how to press F12 in their browsers and how to call cURL. I'm not talking bespoke APIs backend team wrote, but standard ones built into middleware, that fell beyond dev team's "abstraction horizon". GraphQL is a notable example.
Basically, if it means companies can introduce automation without changing anything about the tooling/workflow/programs they already use, it's going to be MASSIVE. Just an install and a prompt and you've already automated a lengthy manual process - awesome.
Companies are going to install an AI inside their own proprietary systems full of proprietary and confidential data and PII about their customers and prospects and whatnot, and let it run around and click on random buttons and submit random forms?
Really??!? What could possibly go wrong.
I'm currently trying to do a large OCR project using the Google Vision API, and then Gemini 1.5 Pro 002 to parse and reconstruct the results (taking advantage, one hopes, of its big context window). As I'm not familiar with the Google Vision API, I asked Gemini to guide me in setting it up.
Gemini is the latest Google model; Vision, as the name implies, is also from Google. Yet Gemini makes several egregious mistakes about Vision, gets names of fields or options wrong, etc.
Gemini 1.5 "Pro" also suggests that concatenating two json strings produces a valid json string; when told that's unlikely, it's very sorry and makes lots of apologies, but still it made the mistake in the first place.
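For anyone wondering what that mistake looks like in practice:

    import json

    a = json.dumps({"name": "Alice"})
    b = json.dumps({"name": "Bob"})

    try:
        json.loads(a + b)  # '{"name": "Alice"}{"name": "Bob"}' is not valid JSON
    except json.JSONDecodeError as err:
        print("concatenation fails:", err)  # "Extra data: line 1 column 18 ..."

    merged = json.loads(f"[{a},{b}]")  # valid: wrap the two documents in a JSON array
    print(merged)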
LLMs can be useful when used with caution; letting one loose in an enterprise environment doesn't feel safe, or sane.
LLMs can't reason, or can't reason logically to be precise; what they are really good at is recalling.
So if you want accurate results when writing code, you need to put all the docs into the input and THEN ask your question. So download all the docs on Vision, put them in the Gemini prompt, and ask your question or for code on how to use Vision, and you'll get much closer to the truth.
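Roughly like this, using the google-generativeai Python package; the local docs folder and the exact prompt wording are just placeholders:

    import pathlib
    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-pro")

    # Concatenate the Vision API docs you downloaded beforehand.
    docs = "\n\n".join(p.read_text() for p in pathlib.Path("vision_docs").glob("*.md"))

    prompt = (
        "Here is the current Google Cloud Vision API documentation:\n\n"
        f"{docs}\n\n"
        "Using only the documentation above, show me Python code that runs "
        "DOCUMENT_TEXT_DETECTION on a PDF in GCS, and list the exact response field names."
    )
    print(model.generate_content(prompt).text)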
Have you tried any others? From what I have tried Gemini makes the most mistakes out of all.
I have tried many others for many other things (via OpenRouter) but I have never compared LLMs on the exact same task; it's confusing enough with one engine... ;-)
Sonnet 3.5 for coding is fine but makes "basic" mistakes all the time. Using LLMs is at times like dealing with a senior expert suffering from dementia: it has arcane knowledge of a lot of things but suddenly misses the obvious that would not escape an intern. It's weird, really.
That's exactly it.
I've been peddling my vision of "AI automation" for the last several months to acquaintances of mine in various professional fields. In some cases, even building up prototypes and real-user testing. Invariably, none have really stuck.
This is not a technical problem that requires a technical solution. The problem is that it requires human behavior change.
In the context of AI automation, the promise is huge gains, but when you try to convince users / buyers, there is nothing wrong with their current solutions. Ie: There is no problem to solve. So essentially "why are you bothering me with this AI nonsense?"
Honestly, human behavior change might be the only real blocker to a world where AI automates most of the boring busy work currently done by people.
This approach essentially sidesteps the need to effect a behavior change, at least in the short term, while AI can prove and solidify its value in the real world.
There's a huge huge gap between "coaxing what you want out of it" and "trusting it to perform flawlessly". Everybody on the planet would use #2, but #1 is just for enthusiasts.
AI is squarely #1. You can't trust it with your credit card to order groceries, or to budget and plan and book your vacation. People aren't picking up on AI because it isn't good enough yet to trust - you still have the burden of responsibility for the task.
Siri, Alexa and Amazon Dash illustrate this well. I remember everyone's excitement and the massive investment in these, and we all know how that turned out. I'm not sure how many times we'll need to relearn that unless an automation works >99% of the time AND fails predictably, people don't use it for anything meaningful.
I think there is a large pool of near minimum-wage white collar workers who wouldn't care about that difference when it comes to executing on their jobs. These are the folks who are already using VBScript, AutoHotKey, Excel wizardry, etc. to automate large parts of their job regardless of any risks and will continue to use these new tools for similar purposes.
There’s nothing to gain for anyone there. Workers will lose their jobs, and managers will lose their reports.
Of course, but they'll go bankrupt if they don't adapt. Just like mom-and-pop corner stores disappeared, or any other large-scale automation: the loom, cars, automated checkout in supermarkets, etc. There will be resistance, but the market will play it out. Similarly, taxi companies started making apps after Uber got successful, and local restaurants reluctantly made websites and added themselves to Google Maps.
Nobody likes to change a system where they already have their own comfortable spot figured out and just want to steep in the lukewarm water until retirement. Fully understandable. But at least in the private sector this will not save them.
Yeah this will be a true paradigm shift
Talking about ancient Windows software... Windows used to have an API for automation in the 2000s (I don't know if it still does). I wrote this MS Access script that ran and moved the cursor at exactly the pixel coordinates where buttons and fields were positioned in a GUI that we wanted to extract data from, in one of my first jobs. My boss used to do this manually. After a week he had millions of records ready to query in Access. You can imagine how excited he was. Was a fun little project and pretty hilarious to see the cursor moving fast AF around the screen like it was possessed. PS: you could screw up the script run pretty easily by bumping into the mouse of that pc.
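For the curious, the modern equivalent of that script is only a few lines of Python with pyautogui (assumed installed); the pixel coordinates are placeholders for wherever the target GUI puts its fields, and the same caveat applies: bump the mouse and the run is toast.

    import time
    import pyautogui

    pyautogui.FAILSAFE = True  # slam the cursor into a screen corner to abort

    def extract_record(record_id):
        pyautogui.click(412, 318)                  # click the "record ID" field (hard-coded pixels)
        pyautogui.typewrite(str(record_id), interval=0.02)
        pyautogui.press("enter")
        time.sleep(0.5)                            # give the GUI time to load the record
        pyautogui.tripleClick(600, 410)            # select the value we want
        pyautogui.hotkey("ctrl", "c")              # copy it; read the clipboard with e.g. pyperclip

    for rid in range(1, 1001):
        extract_record(rid)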
Still present. VB and VBScript could do this by sending mouse moves to window handles discovered using Spy++. You can do it with C# or AutoIT these days.
PowerShell has some amazing capabilities.
Really good software has automation built in, in the form of macros/scripting.
One of the reasons my bash terminal is more effective than point-and-click is the ease of automating routine tasks: from history and aliases to scripting.
Software interop is trickier, as it doesn't so much depend on being able to send messages from one piece of software to another; it's that you need an agreed format for those messages (whether they be expressed in the payload of an API, or a UI-specific stream of points and clicks).
I tried once to integrate with software used by pharmacists in Australia (which costs a fortune and looks straight out of the 90's). Even though they have a SQL database with all information about everything, the DB is encrypted and they provide no way to access it. I tried contacting the company to learn how I could integrate with it but they offered no help. It's not in their interest to help upcoming competitors and they claim they cannot provide the encryption key as that's for protecting the customer's privacy, forgetting that the pharmacists already have access to all data through their software. But without a way to automatically extract information from the DB, there's no way to migrate to new software as no pharmacist would want to use new software if it cannot "import" the data they need.
It's clear that the reason there's no competition in the market is that the companies (I think it's literally one company) selling this software want it that way, and legislation is yet to catch up with the way they ensure their monopoly.
I'm a bit skeptical about this working well enough to handle exceptions as soon as something out of the ordinary occurs. But it seems this could work great for automated testing.
Has anyone tried asking "use computer" to do "Please write a selenium/capybara/whatever test for filling out this form and sending it?"
That would take away some serious drudge work. And it's not a big problem if it fails, contrary to when it makes a mistake in filling out a form in an actual business process.
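The kind of output I'd hope to get back is something like this plain Selenium + pytest test; the URL and selectors are made up for a hypothetical contact form:

    import pytest
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    @pytest.fixture
    def driver():
        d = webdriver.Chrome()
        yield d
        d.quit()

    def test_contact_form_submits(driver):
        driver.get("https://example.com/contact")
        driver.find_element(By.NAME, "name").send_keys("Jane Doe")
        driver.find_element(By.NAME, "email").send_keys("jane@example.com")
        driver.find_element(By.NAME, "message").send_keys("Hello!")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
        assert "Thanks" in driver.find_element(By.CSS_SELECTOR, ".flash-success").text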
Momentic (W24) is doing this! No affiliation, but they've made some solid progress https://momentic.ai/
LLMs are enabling a reimagination of UI. Where's the biggest opportunity in UI that hasn't kept up to date? Legacy and regulated software in the long tail. That's the disruption opportunity for LLMs.
Imagine a banking website. It has a lot of predefined flows of what can be achieved. These flows have steps arranged in a dependency graph. From the server side, an LLM can ask users for inputs to satisfy the dependencies of the task the user wants to accomplish. We will have intuitive UI interfaces in all languages of the world.
But if it's a predefined list of flows, you can just throw Selenium/Puppeteer/Playwright/whatever other automation tool at it, rather than relying on an unstable AI that will do different things every time.
This is just a solution in search of a problem. AIs aren't reliable enough if the content changes constantly, since they'll just click on the "close my account" button for no reason half the time, while a legacy website with no changes is much easier to program a tool like Selenium around than relying on an AI that will still be making random choices half the time.
I think you are confusing client-side automation with server-side customization (out-of-the-box internationalization; if a user can't understand complex technical terms, the LLM can explain them in simple terms with examples built on the fly to illustrate the point), which was my point. I was talking about the future of UI itself.
Not to mention software like peoplesoft, SAP and servicenow. Absolute shit interfaces that employees have to deal with day in and day out.
Now, an in-house tool built on top of this Anthropic API can save hours of drudgery. I can already see sales teams smiling at the new 'submit your expense report' button.
I think it was pretty clear from the beginning that the whole AI thing is going to be winner-takes-all.
If you're in some niche doing AI development work, you are going to be outcompeted by more generalized AI at some point.
With one big exception: your general AI might dominate the business case, but my specialized one can craft raw packets. I’m the one who names the price, not you.
Absolutely! This reminds me of the humanoid robots vs specialized machines debate.
This is pretty similar to the argument for why humanoid robots will be a big deal. The physical world is also built mostly for humans, so having an autonomous agent that can interact with it is huge.
You don’t know for a fact that those two specific packages don’t have supported APIs. Just because the user doesn’t know of any API doesn’t mean none exists. The average accountant or doctor is never going to even ask the vendor “is there an API” because they wouldn’t know what to do with one if there was.
If they're accessible to screen readers they have one. Accessibility is API for apps in disguise.
In this case I doubt they're networked apps so they probably don't have a server API.
> In this case I doubt they're networked apps so they probably don't have a server API.
I think it would be very unusual this decade for software used to run either a medical practice or tax accountants to not be networked. Most such practices have multiple doctors/accountants, each with their individual computer, and they want to be able to share files, so that if your doctor/accountant is away their colleague can attend to you. Managing backups/security/etc is all a lot easier when the data is stored in a central server (whether in the cloud or a closet) than on individual client machines.
Just because it is a fat client MFC-based Windows app doesn’t mean the data has to be stored locally. DCOM has been a thing since 1996.
Being “on the network” doesn’t mean there’s an accessible API. See QuickBooks Desktop. Intuit forces you into using their API, which is XML-based and ranges from slow to timing out.
Is the idea that someone will always reverse engineer it? Yes, but QuickBooks is brittle as is (you can count on at least one database corruption every year or two). I have zero interest in treading into unsupported territory when database corruption is involved and I’m likely going to need Intuit’s help recovering. We can try to restore from backup, but when there’s corruption it doesn’t always restore successfully, or the corruption was lingering silently for some time and rears its head again after a successful restore, and then we’re back to needing Intuit’s help.