This! That's what I wanted since LLMs learned how to code.

And in fact, I think I saw a paper / blog post that showed exactly this, and then... nothing. For the last few years, the tech world went crazy over code generation, with forks of VSCode hooked to LLMs worth billions of dollars and all that. But AI-based code analysis is remarkably poor. The only thing I have seen resembling this is bug report generators, which I believe is one of the worst approaches.

The idea you have, which I also had and which I am sure many thousands of other people have had, seems so obvious. Why is no one talking about it? Is there something wrong with it?

The thing is, using such a feature requires a brain between the keyboard and the chair. A "surprising" token can mean many things: a bug, but also a unique feature; either way, something you should pay attention to. Too much "green" should also be seen as a signal. Maybe you reinvented the wheel and should use a library instead, or maybe you failed to take into account a use case specific to your application.
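To make the idea concrete: given the per-token log-probabilities a model assigns to existing code, the "heat" of each token is just its surprisal, -log p(token | context). A minimal sketch in Python; the thresholds and color names here are arbitrary illustration values, not calibrated against any real model:

```python
def surprisal_heatmap(token_logprobs, thresholds=(1.0, 3.0, 6.0)):
    """Map per-token log-probabilities (natural log) to heat levels.

    Surprisal is -log p(token | context): higher means the model found
    the token less expected. Thresholds are uncalibrated placeholders.
    """
    levels = []
    for lp in token_logprobs:
        s = -lp  # surprisal in nats
        if s < thresholds[0]:
            levels.append("green")   # expected; too much of this is its own signal
        elif s < thresholds[1]:
            levels.append("yellow")
        elif s < thresholds[2]:
            levels.append("orange")
        else:
            levels.append("red")     # surprising: maybe a bug, maybe a unique feature
    return levels

# Log-probs a model might assign to four tokens of existing code
print(surprisal_heatmap([-0.1, -2.0, -4.5, -8.0]))
# → ['green', 'yellow', 'orange', 'red']
```

An editor plugin could then render these levels as background colors, leaving the judgment about what "red" means to the human.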

Maybe such tools don't make good marketing. You need to be a competent programmer to use them. It won't help you write more lines faster. It doesn't fit the fantasy of making anyone into a programmer with no effort (hint: learning a programming language is not the hard part). It doesn't generate the busywork of AI 1 introducing bugs for AI 2 to create tickets for.

Just to point...

> Is there something wrong with it?

> Maybe such tools don't make good marketing.

You had the answer the entire time :)

Features that require a brain between the AI and key-presses just don't sell. Don't expect to see them for sale. (But we can still get them for free.)

I don’t think I understand your point.

Are you saying that people of a certain competence level lose interest in force-multiplying tools? I don’t think you can be saying that because there’s so much contrary evidence. So what are you saying?

Other way around. The masses aren’t interested in force-multiplying tools. They only want to buy force-eliminating tools. They don’t want to work smarter or harder. They don’t want to work at all.

A fairly misanthropic view that hasn't borne out in my experience.

I'm saying they don't sell.

Sometimes people want them so badly that they will self-organize and collaborate outside of a market to make them. But a market won't supply them.

And yes, it's a mix of many things: people not being competent enough to see the value in them, markets putting pressure on companies to listen disproportionately to those people, publicity having such a low signal-to-noise ratio that it can't communicate why a tool is good, and companies not respecting their customers enough to build things that are good for them (that last one isn't inherent to a market economy, but it's near universal nowadays).

Either way, the software market just doesn't sell tools as useful as the GP is talking about.

> The idea you have, that I also had and I am sure many thousands of other people had seem so obvious, why is no one talking about it? Is there something wrong with it?

I expect it definitely requires some iteration. I don't think you can just map logits to heat; you get a lot of noise that way.
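One plausible first iteration on that noise problem is to smooth surprisal over a sliding window rather than coloring raw per-token values, so a single oddly-tokenized identifier doesn't light up on its own. A sketch, assuming surprisal values are already computed; the window size is an arbitrary choice:

```python
def smoothed_surprisal(surprisals, window=5):
    """Average each token's surprisal with its neighbors.

    A lone high-surprisal token gets damped; a sustained run of
    surprising tokens (more likely to be worth a look) stays hot.
    Window size is an uncalibrated placeholder.
    """
    n = len(surprisals)
    out = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        out.append(sum(surprisals[lo:hi]) / (hi - lo))
    return out

# An isolated spike of 9.0 among quiet tokens is pulled down
print(smoothed_surprisal([0.5, 0.5, 9.0, 0.5, 0.5], window=5))
# → [3.3333333333333335, 2.625, 2.2, 2.625, 3.3333333333333335]
```

Other obvious directions: normalizing per token type (string literals are noisier than keywords) or aggregating to line level instead of token level.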

Honestly I just never really thought about it. But now it seems obvious that AI should be continuously working in the background to analyze code (and the codebase) and could even tie into the theme of this thread by providing some type of programming HUD.