Honestly, this is going to be huge for healthcare. There's an incredible amount of waste due to incumbent tech making interoperability difficult.
Hopefully.
I’ve implemented quite a few RPA apps, and the struggle is the request/response turnaround time for realtime transactions. For batch data extraction or input, RPA is great, since there’s no expectation about process duration. But when a client requests data in realtime that can only be retrieved from an app via RPA, the response time is abysmal. Just picture it: start the app; log into it if it requires authentication (and hope the MFA is email-based rather than token-based, so you can access the mailbox via an in-place configuration with MS Graph/Google Workspace/etc.); navigate to the app’s view that has the data, or worse, bring up a search interface since the exact record isn’t known and try to find the requested data. So brittle...
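To make the cost concrete, here's a minimal sketch that just sums per-step latencies for a realtime RPA lookup like the one described above. Every step name and timing here is an illustrative assumption, not a measurement:

```python
# Hypothetical per-step latencies (seconds) for a realtime RPA lookup.
# The numbers are illustrative assumptions, not measured values.
STEPS = {
    "launch_app": 8.0,
    "log_in": 5.0,
    "poll_mailbox_for_mfa_code": 20.0,  # email-based MFA via MS Graph or similar
    "navigate_to_view": 6.0,
    "search_for_record": 10.0,          # exact record isn't known up front
    "scrape_result": 2.0,
}

def worst_case_response_time(steps: dict[str, float]) -> float:
    # A realtime caller has to wait for every step in sequence,
    # so the response time is the sum of all of them.
    return sum(steps.values())
```

Even with generous assumptions, the caller waits the better part of a minute for a single lookup, which is why this pattern works for batch but not for realtime.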
It is.
CTO of healthcare org here.
I just put a hold on a new RPA project to keep an eye on this and see how it develops.
According to their docs, Anthropic will sign a BAA.
Out of curiosity, how are high-risk liability environments like yours coming to terms with the non-deterministic nature of models like these? E.g., the non-zero chance that it might click a button it *really* shouldn't, as demonstrated in the failure demo.
Technical director at another company here: We have humans double-check everything, because we're required by law to. We use automation to make response times faster, or to do the bulk of the work and then just have humans double-check the AI. To do otherwise would be classed as "a software medical device", which needs documentation out the wazoo, and for good reason. I'm not sure you could even have a medical device where most of your design doc is "well I just hope it does the right thing, I guess?".
Sometimes, the AI is more accurate or safer than humans, but it still reads better to say "we always have humans in the loop". In those cases, we reap the benefits of both: Use the AI for safety, but still have a human fallback.
I'm curious, what does your human verification process look like? Does it involve a separate interface or a generated report of some kind? I'm currently working on a tool for personal use that records actions and replays them later when a specified event occurs. For verification, I generate a CSV report after the process completes and back it up with screen recordings.
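For what it's worth, the CSV audit-report part of this can be sketched in a few lines of stdlib Python. The column names and action fields below are hypothetical, just to show the shape of one-row-per-action logging:

```python
import csv
import io

def write_audit_report(actions: list[dict], out) -> None:
    # One row per automated action: what ran, when, on what, and the outcome.
    # Column names are a hypothetical schema for illustration.
    writer = csv.writer(out)
    writer.writerow(["timestamp", "action", "target", "status"])
    for a in actions:
        writer.writerow([a.get("timestamp", ""), a["action"],
                         a.get("target", ""), a["status"]])

# Example action log as the recorder might produce it.
actions = [
    {"timestamp": "2024-10-23T09:00:01", "action": "click",
     "target": "#submit", "status": "ok"},
    {"timestamp": "2024-10-23T09:00:04", "action": "fill",
     "target": "#patient-id", "status": "ok"},
]
buf = io.StringIO()
write_audit_report(actions, buf)
```

A reviewer can then diff the report against the screen recording, row by row.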
It's a separate interface where the output of the LLM is rated for safety, and anything unsafe opens a ticket to be acted upon by the medical professionals.
I don't know yet. We may not do it.
We haven't deployed a model like this, it's new.
I've done a ton of various RPAs over the years, using all the normal techniques, and they're always brittle and sensitive to minor updates.
For this, I'm taking a "wait and see" approach. I want to see and test how well it performs in the real world before I deploy it, and wait for it to come out of beta so Anthropic will sign a BAA.
The demo is impressive enough that I want to give the tech a chance to mature before my team and I invest a ton of time into a more traditional RPA.
At a minimum, if we do end up using it, we'll have solid guard rails in place - it'll run on an isolated VM, all of its user access will be restricted to "read only" for external systems, and any content that comes from it will go through review by our nurses.
AWS Bedrock deployed models, which include Anthropic Claude models, claim HIPAA compliance eligibility.
What is a BAA?
https://www.techtarget.com/healthtechsecurity/feature/What-I... A Business Associate Agreement: a contract that lets a business associate handle HIPAA-protected data on a covered entity's behalf.
Healthcare has the extra complication of HIPAA / equivalent local laws, and institutions being extremely unwilling to process patient data on devices they don't directly control.
I don't think this is going to work in that industry until local models get good enough to do it, and small enough to be affordable to hospitals.
Hospitals use O365, there are HIPAA-compliant editions of any prominent cloud service.
That industry only thinks it controls its devices. CrowdStrike showed there are many bridges over that moat.
Their concern is compliance, not security.
Based on Tog's paradox (https://news.ycombinator.com/item?id=41913437) the moment this becomes easy, it will become hard again with extra regulation and oversight and documentation etc.
Similarly I expect that once processing/searching laws/legal records becomes easy through LLMs, we'll compensate by having orders of magnitude more laws, perhaps themselves generated in part by LLMs.
> There's an incredible amount of waste due to incumbent tech making interoperability difficult.
So the solution to that is to add another layer of complex AI tech on top of it?
Well nothing else we've tried has worked.
I work with healthcare in the UK. There’s a promising approach called CSV files which is revolutionising some of my workflows :)
We’ll see. Having worked in this space in the past, the technical challenges can be overcome today with no new technology: it's a business, sales, and regulation challenge more than a tech one.
Sometimes.
In my case I have a bunch of nurses that waste a huge amount of time dealing with clerical work and tech hoops, rather than operating at the top of their license.
Traditional RPAs are tough when you're dealing with VPNs, 2FA, remote desktop (in multiple ways), a variety of EHRs, and scraping clinical documentation from poorly structured clinical notes or PDFs.
This technology looks like it could be a game changer for our organization.
True, 2FA and all these little details that exist now have made this automation quite insanely complicated. It is of course necessary that we have 2FA etc, but there is huge potential in solving this I believe.
From a security standpoint, what's considered the "proper" way of assigning a bot access based on a person's 2FA? Would that be some sort of limited scope expiring token like GitHub's fine-grained personal access tokens?
Security isn't the only issue here. There are more and less "proper" ways of giving bots access to a system. But the whole field of RPA exists in large part because the vendors don't want you to access the system this way. They aren't going to give you a "proper" way of assigning bot access in a secure way, because they explicitly don't want you to do it in the first place.
I don't know, I feel like it has to be some sort of near-field identity proof. E.g., as long as you are wearing a piece of equipment near a physical computer, it can run all those automations for you, or something similar. I haven't fully thought through what the best solution could be, or whether someone is already working on it, but I feel like there has to be something like that, which would allow better UX around access while keeping security at the same time.
So maybe something like an automated YubiKey that you can opt in to a nearby computer so it has all the access it needs. Especially if you're working from home, you could set it so that while you are within a 15 m radius of your laptop, it is able to sign all access requests.
Because right now, considering the number of tools I use, with single sign-on, VPN, Okta, etc., and how slow they all seem to be, constantly logging in everywhere is an extremely frustrating process. It almost makes me procrastinate on my work, because I can't be bothered. Everything about those weird little things is an absolutely terrible experience, including things like cookie banners.
And it is ridiculous, because I'm working from home, but frustratingly high amount of time is spent on this bs.
A Bluetooth wearable or similar to prove that I'm nearby, essentially. To me that seems like it could alleviate a lot of safety concerns while providing an amazing developer and user experience.
That's a really cool idea.
The main attack vector would then probably be some man-in-the-middle intercepting the signal from your wearable, which leads me to wonder whether you could protect yourself by having the responses valid for only an extremely short duration, e.g. ~1ms, such that there's no way for an attacker to do anything with the token unless they gain control over compute inside your house.
Maybe we could build an authenticator as part of the RPA tool or bot client itself. This way, the bot could generate time-based one-time passwords (TOTPs).
Precisely why I built therapedia.io
I agree that at the business contract level, it's more about sales and regulations than tech. But in my experience working close to minimum wage white-collar jobs, about 1 in 4 of my coworkers had automated most of their job with some unholy combination of VBScript, Excel wizardry, AutoHotKey, Selenium, and just a bit of basic Python sprinkled in; IT, security, and privacy concerns notwithstanding. Some were even dedicated enough to pay small amounts out-of-pocket for certain tools.
I'd bet that until we get the risks whittled down enough for larger organizations to adopt this on a wide scale, the biggest user group for AI automation tools will be at the level of individual workers who are eager to streamline their own tasks and aren't paid enough to care about those same risks.
Or you'll start getting a CAPTCHA while trying to pump insulin.
(Shrug) AI is now better at CAPTCHAs than I am, so bring it on I guess.