Bait what, exactly? Getting the user to type "yes"? Great accomplishment.

Sometimes I want the extra paragraph, sometimes I don't. Sometimes I like the suggested follow-up, sometimes I don't. Sometimes I have half an hour in front of me to keep digging into a subject, sometimes I don't.

Why should the LLM "just write the extra paragraph" (consuming electricity in the process) for a potential follow-up question a user might, or might not, have? If I write a simple question, I hope to get a simple answer, not a whole essay answering things I did not explicitly ask about. And if I want to go deeper, typing three letters is not exactly a huge cost.

You send all the tokens at least one extra time.

I’m not privy to their data on what this does to engagement, but intuitively the extra inference/token cost it incurs doesn’t seem to align with their current business model.

If they were doing it to API customers, sure, but getting the free or flat-rate customers to use more tokens seems counterproductive.

It juices their "engagement" metrics, which are the drug of choice for investors, right up there with net promoter scores.

We’ll see how this plays out. It’s a turbocharged version of enshittification, at a time when other models are showing stronger growth in B2B and other valuable markets.

I canceled my ChatGPT subscription and jumped to Claude, not as silly political theater, but simply because the product was better for professional use. Looking at data from Ramp and others, I’m not alone.