Reminds me of when I asked Gemini how to do some stuff in Google Docs with Apps Script, and it just hallucinated the capability and the code to make it work. Turns out what I wanted to do isn't supported at all.

I feel like we aren't properly using AI in products yet.

I asked about a niche JSON library for C. It apparently wasn't in the training data, so it just invented how it imagined a JSON library would work.

I've also had a lot of issues with CMake, where it just invents syntax and functions. Every new question has to be asked in a fresh chat to clear out the context poisoning.

It's the things that lack good docs that I want to ask about. But that's where it's most likely to fail.

I think users should get a refund on the tokens when this happens.

That would turn a business model that is already questionable in terms of profitability into one that would never, ever be profitable. Just sayin.

Yet Google raised my Workspace subscription cost by 25% last night because our current agreement is suddenly unworthy of all the new “AI value” they’ve added… value I didn’t even know existed until I started paying for it. I don’t even want to know what it is supposed to be referencing… I just want to dump it ASAP.

The tool we use for AI answers on our docs lets you mine that data for feature requests. It generates a report of what it didn't have answers for and summarizes those as potential feature gaps (or at least the gaps it's aware it didn't have answers for).
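For anyone wanting to roll their own version of that report, here's a minimal sketch of the idea (not the tool's actual pipeline; the export format, stopword list, and questions are all made up): bucket unanswered questions by their most common keyword and rank the buckets.

```python
import re
from collections import Counter, defaultdict

# Hypothetical export of questions the docs bot couldn't answer.
unanswered = [
    {"question": "How do I export the report to CSV?"},
    {"question": "Can I export results as CSV or Excel?"},
    {"question": "Is there an API for bulk import?"},
]

STOPWORDS = {"how", "do", "i", "the", "a", "an", "to", "in", "can",
             "is", "it", "on", "my", "as", "or", "there", "for"}

def keywords(text):
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

# Count keyword frequency across all unanswered questions.
freq = Counter(w for q in unanswered for w in keywords(q["question"]))

# Assign each question to its most frequent keyword (a rough "topic").
buckets = defaultdict(list)
for q in unanswered:
    kws = keywords(q["question"])
    if kws:
        topic = max(kws, key=lambda w: (freq[w], w))  # tie-break alphabetically
        buckets[topic].append(q["question"])

# The biggest buckets are candidate feature gaps.
for topic, qs in sorted(buckets.items(), key=lambda kv: -len(kv[1])):
    print(f"{topic}: {len(qs)} unanswered question(s)")
```

Real tools presumably use embeddings and clustering rather than keyword counting, but the shape of the report is the same.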

People seem more willing to ask an AI about certain things than to risk being judged for asking the same question of a human, so in that regard it does seem to surface slightly different feature requests than we hear when talking to customers directly.

We use inkeep.com (not affiliated, just a customer).

> We use inkeep.com (not affiliated, just a customer).

And what do you pay? It's crazy that none of these AI CSRs have public pricing. There should just be monthly subscription tiers that include some number of queries, plus a cost per query beyond that.
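To make the complaint concrete, the billing model being asked for is about ten lines of code (all numbers here are invented):

```python
# Hypothetical tiers: a flat monthly fee covers N queries,
# and anything beyond that is billed per query.
TIERS = {
    "starter": {"fee": 99.0, "included": 1_000, "overage": 0.15},
    "growth": {"fee": 499.0, "included": 10_000, "overage": 0.08},
}

def monthly_cost(tier: str, queries_used: int) -> float:
    t = TIERS[tier]
    extra = max(0, queries_used - t["included"])
    return t["fee"] + extra * t["overage"]

print(monthly_cost("starter", 1_500))  # 99 + 500 * 0.15 = 174.0
```

If the pricing page can't be reduced to something like that, it's hard not to assume the price is whatever sales thinks you'll pay.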

I’ve found LLMs (or at least every one I’ve tried this on) will always assume the customer is correct, so even if they’re flat-out wrong, the LLM will make up some bullshit to confirm the customer is still correct.

It’s great when you’re looking to do creative stuff, but terrible when you’re trying to confirm the correctness of an approach, or asking for support on something you didn’t even know doesn’t exist.

That's because its "answers" are actually "completions". You can't escape that fact: LLMs will always "hallucinate".
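A toy illustration of that point (all tokens and scores here are invented): the sampling step only ranks plausibility; nothing in it checks truth, so "I'm not sure" only comes out if the training data happened to make it likely.

```python
import math
import random

def sample_next(logits):
    """Pick one token from a softmax over made-up scores."""
    exps = {tok: math.exp(score) for tok, score in logits.items()}
    total = sum(exps.values())
    r, acc = random.random(), 0.0
    for tok, e in exps.items():
        acc += e / total
        if r <= acc:
            return tok
    return tok  # float-rounding fallback

# Hypothetical next-token scores after "The CMake command you want is":
logits = {
    "add_custom_command(": 3.1,  # real command, plausible continuation
    "cmake_do_the_thing(": 2.9,  # invented, but just as plausible-looking
    "I'm not sure": 0.4,         # rarely the likeliest continuation
}
print(sample_next(logits))  # ~96% of the time: a confident answer
```

There's no "facts" branch in that loop; confident-sounding and correct are just two different properties that sometimes coincide.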

> I feel like we aren't properly using AI in products yet.

Very similar sentiment at the height of the crypto/digital currency mania.