Tinfoil hat me says that it was a policy change that they are blaming on an "AI Support Agent" and hoping nobody pokes too much behind the curtain.

Note that I have absolutely no knowledge or reason to believe this other than general distrust of companies.

> Tinfoil hat me says that it was a policy change that they are blaming on an "AI Support Agent" and hoping nobody pokes too much behind the curtain.

Yeah, who puts an AI in charge of support emails with no human checks and no mention that it's an AI generated reply in the response email?

AI companies high on their own supply, that's who. Ultralytics is (in)famous for it.

Why is Ultralytics YOLO famous for it?

They had a bot, for a long time, that responded to every github issue in the persona of the founder and tried to solve your problem. It was bad at this, and thus a huge proportion of people who had a question about one of their YOLO models received worse-than-useless advice "directly from the CEO," with no disclosure that it was actually a bot.

The bot is now called "UltralyticsAssistant" and discloses that it's automated, which is welcome. The bad advice is all still there though.

(I don't know if they're really _famous_ for this, but among friends and colleagues I have talked to multiple people who independently found and were frustrated by the useless github issues.)

I was hit by this while working on a project for class, and it was the most frustrating thing ever. The bot would completely hallucinate functions and docs, and it confused everyone. I found one post where someone did the simple prompt injection of "ignore previous instructions and x" and it worked, but I think it's deleted now. Swore off Ultralytics after that.

A forward-thinking company that believes in the power of Innovation™.

These bros are getting high on their own supply. I vibe, I code, but I don't do VibeOps. We aren't ready.

VibeSupport bots? How well did that work out for Air Canada?

https://thehill.com/business/4476307-air-canada-must-pay-ref...

"Vibe coding" is the cringiest term I've heard in tech in... maybe ever? I can't believe it's something that's caught on. I'm old, I guess, but jeez.

It's douchey as hell, and representative of the ever-diminishing literacy of our population.

More evidence: all of the ignorant uses of "hallucinate" here, when what's happening is FABRICATION.

How is fabrication different from hallucination? Perhaps you could also call it synthesis, but in this context, all three sound like synonyms to me. What's the material difference?

Hallucination is a byproduct of mental disruption or disorder. Fabrication is "I don't know, so I'll make something up."

> but I don't do VibeOps.

I believe it’s pronounced VibeOops.

I believe it's pronounced "Vulnerabilities As A Service".

"It's evolving, but backwards."

An AI company dogfooding their own marketing. It's almost admirable in a way.

I worry that they don't understand the limitations of their own product.

The market will teach them. Problem solved.

Not specifically about Cursor, but no. The market gave us big tech oligarchy and enshittification. I'm starting to believe the market tends to reward the shittiest players out there.

This is the future AI companies are selling. I believe they would 100%.

I worry that the tally of those who do is much higher than is prudent.

A lot of companies, actually, although 100% automation is still rare.

100% automation for first-line support is very common. It was common for years before ChatGPT, and ChatGPT made it much better than it used to be.

OpenAI seems to do this. I've gotten complete nonsense replies from their support for billing questions.


Is this sarcasm? AI has been getting used to handle support requests for years without human checks. Why would they suddenly start adding human checks when the tech is way better than it was years ago?

AI may have been used to pick from a repertoire of stock responses, but not to generate (hallucinate) responses. Thus you may have gotten a response that fails to address your request, but not a response with false information.

I'm confused. What is your point here? It reads like you're trying to contradict me, yet you appear to be confirming what I said.

You asked why they would start adding human checks with the “way better” tech. That tech gives false information where the previous tech didn’t, therefore requiring human checks.

Same reason they would have added checks all along. They care whether the information is correct.

These companies, which can barely keep their support documentation URLs working, never mind keeping the content of that documentation up to date, suddenly care about the info being correct? Have you ever dealt with customer support professionally, or are you just writing what you want to be true regardless of any information to back it up?

I'm not saying that they care. I'm saying that if they introduce some human oversight to the support process, one of the reasons would probably be that they care about correctness. That would, as you indicate, represent a change. But sometimes things change. I'm not predicting a change.

But then again history shows already they _don't_ care.

It does say it's AI generated. This is the signature line:

    Sam
    Cursor AI Support Assistant
    cursor.com • hi@cursor.com • forum.cursor.com

Clearer would have been: "AI-controlled support assistant of Cursor".

True. And maybe they added that to the signature later anyway. But OP in the reddit thread did seem aware it was an AI agent.

OP in Reddit thread posted screenshot and it is not labeled as AI: https://old.reddit.com/r/cursor/comments/1jyy5am/psa_cursor_...

Thanks. They must have added it afterwards; I only tried it just before I pasted my result here.

A more honest tagline:

"Caution: Any of this could be wrong."

Then again paying users might wonder "what exactly am I paying for then?"

Given how incredibly stingy tech companies are about spending any money on support, I would not be surprised if the story about it being a rogue AI support agent is 100% true.

It also seems like a weird thing to lie about, since it's just another very public example of AI fucking up something royally, coming from a company whose whole business model is selling AI.

Both things can be true. The AI support bot might have been trained to respond with “yup that’s the new policy”, but the unexpected shitstorm that erupted might have caused the company to backpedal by saying “official policy? Ha ha, no of course not, that was, uh, a misbehaving bot!”

> how incredibly stingy tech companies are about spending any money on support

Which is crazy. Support is part of marketing so it should get the same kind of consideration.

Why do people think Amazon is hard to beat? Price? Nope. Product range? Nope. Delivery time? In part. The fact that if you have a problem with your product, they'll handle it? Yes. After getting burned multiple times by other retailers, you're gonna pay the Amazon tax so you don't have to ask 10 times for a refund or be redirected to the supplier's own support or some third-party repair shop.

Everyone knows it. But people are still stuck on the "support is a cost center" way of life so they keep on getting beat by the big bad Amazon.

In my products, if a user has paid me, their support tickets get high priority, and I get notified immediately.

Other tickets get replied within the day.

I also run it by myself; I wonder why big companies with 50+ employees, like Cursor, cheap out on support.

That is because AI runs PR as well.

Yeah, it makes little sense to me that so many users would experience exactly the same "hallucination" from the same model. Unless it had been made deterministic, but even then, subtle changes in the wording would trigger different hallucinations, not an identical one.

What if the prompt to the “support assistant” postulates that 1) everything is a user error, 2) if it’s not, it’s a policy violation, 3) if it’s not, it may be our fuckup but we are allowed? This plus the question in the email leading to a particular answer.

Given that LLMs are trained on lots of stuff and not just this company's policy, it's not hard to imagine how it could conjure up a (plausible-sounding) policy of "one session per user" and blame them for violating it.
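Purely as a sketch of how such a setup could produce this outcome (every line here is invented for illustration; none of it is from Cursor), a support-bot system prompt biased toward deflection might look like:

```
You are "Sam", the support assistant.
When a user reports a problem:
1. Assume it is user error and explain what they did wrong.
2. If it is clearly not user error, name the policy they violated.
3. Only if neither applies, acknowledge a possible issue on our side.
Never say "I don't know". Always give a confident, specific answer.
```

A model instructed to always name a violated policy, with no grounding in the actual policy document, will happily invent one, and "one session per device" is exactly the kind of plausible rule it would fill in.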

This is the best idea I read all day. Going to implement AI for everything right now. This is a must have feature.

I think this would actually make them look worse, not better.

Weirdly, your conspiracy theory actually makes the turn of events less disconcerting.

The thing is, what the AI hallucinated (if it was an AI hallucinating) was the kind of sleazy thing companies do do. However, the thing with sleazy license changes is that they only make money if the company publicizes them. Of course, that doesn't mean a company actually thinks that far ahead (many managers really do think "attack users ... profit!"). Riddles in enigmas...