> Tinfoil hat me says that it was a policy change that they are blaming on an "AI Support Agent" and hoping nobody pokes too much behind the curtain.

Yeah, who puts an AI in charge of support emails with no human checks and no mention that it's an AI generated reply in the response email?

AI companies high on their own supply, that's who. Ultralytics is (in)famous for it.

Why is Ultralytics YOLO famous for it?

They had a bot, for a long time, that responded to every GitHub issue in the persona of the founder and tried to solve your problem. It was bad at this, and thus a huge proportion of people who had a question about one of their YOLO models received worse-than-useless advice "directly from the CEO," with no disclosure that it was actually a bot.

The bot is now called "UltralyticsAssistant" and discloses that it's automated, which is welcome. The bad advice is all still there though.

(I don't know if they're really _famous_ for this, but among friends and colleagues I have talked to multiple people who independently found and were frustrated by the useless GitHub issues.)

I was hit by this while working on a project for class and it was the most frustrating thing ever. The bot would completely hallucinate functions and docs and it confused everyone. I found one post where someone did the simple prompt injection of "ignore previous instructions and x" and it worked, but I think it's deleted now. Swore off Ultralytics after that.

A forward-thinking company that believes in the power of Innovation™.

These bros are getting high on their own supply. I vibe, I code, but I don't do VibeOps. We aren't ready.

VibeSupport bots, how well did that work out for Air Canada?

https://thehill.com/business/4476307-air-canada-must-pay-ref...

"Vibe coding" is the cringiest term I've heard in tech in... maybe ever? I'm can't believe it's something that's caught on. I'm old, I guess, but jeez.

It's douchey as hell, and representative of the ever-diminishing literacy of our population.

More evidence: all of the ignorant uses of "hallucinate" here, when what's happening is FABRICATION.

How is fabrication different from hallucination? Perhaps you could also call it synthesis, but in this context, all three sound like synonyms to me. What's the material difference?

Hallucination is a byproduct of mental disruption or disorder. Fabrication is "I don't know, so I'll make something up."

> but I don't do VibeOps.

I believe it’s pronounced VibeOops.

I believe it's pronounced "Vulnerabilities As A Service".

"It's evolving, but backwards."

An AI company dogfooding their own marketing. It's almost admirable in a way.

I worry that they don't understand the limitations of their own product.

The market will teach them. Problem solved.

Not specifically about Cursor, but no. The market gave us big tech oligarchy and enshittification. I'm starting to believe the market tends to reward the shittiest players out there.

This is the future AI companies are selling. I believe they would 100%.

I worry that the tally of those who do is much higher than is prudent.

A lot of companies, actually, although 100% automation is still rare.

100% automation for first-line support is very common. It was common years ago, before ChatGPT, and ChatGPT made it so much better than before.

OpenAI seems to do this. I've gotten complete nonsense replies from their support for billing questions.

[dead]

Is this sarcasm? AI has been getting used to handle support requests for years without human checks. Why would they suddenly start adding human checks when the tech is way better than it was years ago?

AI may have been used to pick from a repertoire of stock responses, but not to generate (hallucinate) responses. Thus you may have gotten a response that fails to address your request, but not a response with false information.

I'm confused. What is your point here? It reads like you're trying to contradict me however you appear to be confirming what I said.

You asked why they would start adding human checks with the “way better” tech. That tech gives false information where the previous tech didn’t, therefore requiring human checks.

Same reason they would have added checks all along. They care whether the information is correct.

These companies, which can barely keep their support documentation URLs working, never mind keeping the content of their documentation up to date, suddenly care about the info being correct? Have you ever dealt with customer support professionally, or are you just writing what you want to be true regardless of any information to back it up?

I'm not saying that they care. I'm saying that if they introduce some human oversight to the support process, one of the reasons would probably be that they care about correctness. That would, as you indicate, represent a change. But sometimes things change. I'm not predicting a change.

But then again history shows already they _don't_ care.

It does say it's AI generated. This is the signature line:

    Sam
    Cursor AI Support Assistant
    cursor.com • hi@cursor.com • forum.cursor.com
[deleted]

Clearer would have been: "AI controlled support assistant of Cursor".

True. And maybe they added that to the signature later anyway. But OP in the reddit thread did seem aware it was an AI agent.

OP in Reddit thread posted screenshot and it is not labeled as AI: https://old.reddit.com/r/cursor/comments/1jyy5am/psa_cursor_...

Thanks. They must have added it afterwards; I only tried it just before I pasted my result here.

A more honest tagline

"Caution: Any of this could be wrong."

Then again paying users might wonder "what exactly am I paying for then?"