> we're not actually on the right track to achieve real intelligence.

Real intelligence means you have to say "I don't know" when you don't know, or ask for help, or even just refuse to help, with the subtext being that you don't want to appear stupid.

The models could ostensibly do this when they have low confidence in their own results, but they don't. What I don't know is whether that's because it would be very computationally difficult or because it would harm the reputation of the companies charging a good sum to use them.

> Real intelligence means you have to say "I don't know" when you don't know

I have met many supposedly intelligent, certainly high status, humans who don't appear to be able to do that either.

I have more confidence we can train AIs to do it, honestly.

That's just not how they work, really. They don't know what they don't know and their process requires an output.

I think they're getting better at it, but it's likely just the number of parameters getting bigger and bigger in the SOTA models more than anything.

They do know what they don't know. There's a probability distribution for outputs that they are sampling from. That just isn't being used for that purpose.

Common misconception. As far as we know, LLMs are not calibrated, i.e. their output "probabilities" are not necessarily correlated with the actual error rates, so you can't use e.g. the softmax values to estimate confidence. That is why it is more accurate to talk about the model's "logits", "softmax values", "simplex mapping", "pseudo-probabilities", or even more agnostically, just "output scores", unless you actually have strong evidence of calibration.

To get calibrated probabilities, you actually need to use calibration techniques, and it is extremely unclear if any frontier models are doing this (or even how calibration can be done effectively in fancy chain-of-thought + MoE models, and/or how to do this in RLVR and RLHF based training regimes). I suppose if you get into things like conformal prediction, you could ensure some calibration, but this is likely too computationally expensive and/or has other undesirable side-effects.
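For concreteness, here is a minimal sketch of one classic post-hoc technique, temperature scaling on a held-out set (in the spirit of Guo et al. 2017); the numbers and shapes are invented, and this says nothing about what any frontier lab actually does:

    import numpy as np
    from scipy.optimize import minimize_scalar

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def nll(temperature, logits, labels):
        # Average negative log-likelihood of the true labels at this temperature.
        probs = softmax(logits / temperature)
        return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

    def fit_temperature(val_logits, val_labels):
        # One scalar T, chosen to minimize NLL on held-out (logits, labels) pairs.
        res = minimize_scalar(nll, bounds=(0.05, 10.0), method="bounded",
                              args=(val_logits, val_labels))
        return res.x

    # Toy usage with made-up numbers: 3 held-out examples, 2 classes.
    val_logits = np.array([[4.0, 0.0], [3.5, 0.5], [0.2, 0.1]])
    val_labels = np.array([0, 1, 0])
    T = fit_temperature(val_logits, val_labels)
    calibrated = softmax(val_logits / T)  # reuse T on new logits at inference time

The point is just that calibration is an extra, explicit step on labelled held-out data; the raw softmax values don't give it to you for free.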

EDIT: Oh and also there are anomaly detection approaches, which attempt to identify when we are in outlier space using various metrics (e.g. distances) over the embeddings, but even getting actual probabilities here is tricky. This is why it is so hard to get models to say they "don't know" with any kind of statistical certainty, because that information isn't generally actually "there" in the model, in any clean sense.

I don't think it's that hard to get them to say "I don't know"

I'm pretty sure they are actively trained to avoid it.

Besides, like, what would you do if you asked your $200/mo AI something and it blanked on you?

> I'm pretty sure they are actively trained to avoid it.

I'm not sure who is doing what training exactly, but I can say that some of my attempts to get it to solve problems that have not yet actually been solved, e.g. the Collatz conjecture, have (inconsistently!) ended with it saying it doesn't know how to solve the problem.

Other times it absolutely makes stuff up; fortunately for me, my personality includes actually testing what it says, so I didn't fall into the sycophantic honey trap and take it seriously when it agreed with my shower thoughts, and definitely didn't listen when it identified a close-up photo of some Solanum nigrum growing next to my tomatoes as also being tomatoes.

> Besides, like, what would you do if you asked your $200/mo AI something and it blanked on you?

I'd rather it said "IDK" than made some stuff up. Them making stuff up is, as we have seen from various news stories about AI, dangerous.

"Well-unknown" questions are maybe the one situation where LLMs will say "I don't know", simply because of all the overwhelming statements in its training data referring to the question as unknown. It'd be interesting to see how LLMs would adapt to changing facts. Suppose the Collatz conjecture was proven this year, and the next the major models got retrained. Would they be able to reconcile all the new discussion with the previous data?

It's not hard to get them to say "I don't know", and they will do so regularly. It's hard to get them to say "I don't know" reliably (i.e. to say it when they don't actually know and to not say it when they do know). And in general, even for statements or tasks they do 'know' (i.e. normally get right), they will occasionally get them wrong.

I don't know if we are talking past each other, but I don't think this conversation is about absolute probabilities? The question is about relative uncertainty, and the softmax values are just fine for that.

It is too computationally expensive, which is why nobody does this for production inference. But there are alignment tools to extract out these latent-space probabilities for researchers in the frontier labs.

> The question is about relative uncertainty, and the softmax values are just fine for that.

They really aren't, especially if you consider the chain-of-thought / recursive application case, and also that you can't even assume that e.g. a difference of 0.1 in softmax values means the same relative difference from input to input, or that a 0.9 is always "extremely confident", etc. You really have no idea unless you are testing the calibration explicitly on calibration data.

> But there are alignment tools to extract out these latent-space probabilities for researchers in the frontier labs

You can get embeddings; if you can get calibrated probabilities, you'll need to provide a citation, because this would be a huge deal for all sorts of applications.

Relative probabilities. That means comparing 2+ alternatives, and we're only talking about the model's worldview, not objective reality. The math for that is relatively straightforward. "Yes" could be 0.9, and OK, that means nothing. But if we artificially constrain outputs to "Yes" and "No", and calculate the softmax for Yes to be 0.7 and No to be 0.3, that does lead to a straightforward probability calculation. [Not the naïve calculation you would expect, because of how softmax is computed. But you can derive an equation to convert it into normalized probabilities.]
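Roughly what I mean, as a sketch; the token names and logit values are invented, and this gives the model's relative preference between the two options, not a calibrated probability of being right:

    import math

    def relative_yes_probability(yes_logit, no_logit):
        # Two-way softmax over just the candidate tokens.
        e_yes, e_no = math.exp(yes_logit), math.exp(no_logit)
        return e_yes / (e_yes + e_no)

    print(relative_yes_probability(2.1, 1.3))  # ~0.69 with these made-up logits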

And now I'm certain we're talking past each other. I'm not talking about calibrated probabilities at all. Just the notion of "how confident do I feel about this?" which is what I interpreted the question above to be about. You can get that out of an LLM, with some work.

> But if we artificially constrain outputs to "Yes" and "No", and calculate the softmax for Yes to be 0.7 and No to be 0.3, that does lead to a straightforward probability calculation. [Not the naïve calculation you would expect, because of how softmax is computed. But you can derive an equation to convert it into normalized probabilities.]

There is nothing straightforward about this, and no, there is no such formula.

> I'm not talking about calibrated probabilities at all. Just the notion of "how confident do I feel about this?"

If all you care about is vibes / feels, sure. If you actually need numerical guarantees and quantitative estimates to make your "feelings" about confidence mean something to rigorously justify decisions, you need calibration. If you aren't talking about calibration in these discussions, you are missing probably the most core technical concept that addresses these issues seriously.

We're talking about artificial intelligence. Making computers think the way people do. People are notoriously miscalibrated on their own self-assessed probabilities too.

Finding a way to objectively calibrate a sense of "how confident do I feel about this?" would be fantastic. But let's not move the goalposts. It would still be incredibly useful to have a machine that merely matches the equivalent statement of confidence or uncertainty that a human would assign to their mental model, even if badly calibrated.

IMO it is you who are moving the goalposts, most likely in an attempt to hide the fact you were unaware of calibration before this discussion.

> It would still be incredibly useful to have a machine that merely matches the equivalent statement of confidence or uncertainty that a human would assign to their mental model, even if badly calibrated.

If human feelings are badly calibrated, they are useless here too, so no, I don't agree. Things like "confidence" only matter if they are actually tied to real outcomes in a consistent way, and that means calibration.

Please assume good faith.

I’m not clear what you mean by “know.” If you mean “the information is in the model” then I mostly agree, distributional information is represented somewhere. But if you mean that a model can actually access this information in a meaningful and accurate way—say, to state its confidence level—I don’t think that’s true. There is a stochastic process sampling from those distributions, but can the process introspect? That would be a very surprising capability.

yes:

> In this experiment, however, the model recognizes the injection before even mentioning the concept, indicating that its recognition took place internally.

https://www.anthropic.com/research/introspection

Having a probability distribution to sample from is not the same thing as knowing, because they don't know anything about the provenance of the data that was used to build the distribution. They trust their training set implicitly by construction. They have no means to detect systematic errors in their training set.

You are talking about something different. If I ask you a yes/no question, and then ask you how certain you are, the answer you give is not an objective measurement of how likely you are to be right. You don't have access to that either. If you say "I'm very confident" or "Maybe 50/50" -- that is an assessment of your own internal weighted evidence, which is the equivalent of an LLM's softmax distribution.

Well, with thinking models, it's not that simple. The probability distribution is over the next token. But if a model thinks before producing an answer, you can have a high-confidence next token even if repeatedly sampling the model's thinking chain would reveal that the real answer-level distribution has low confidence.
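A sketch of the kind of thing I mean, where sample_answer is a hypothetical helper wrapping whatever chat API you use at nonzero temperature:

    from collections import Counter

    def chain_level_confidence(prompt, sample_answer, n=20):
        # Re-run the whole question n times; each call re-samples a full thinking
        # chain. Agreement across runs is an answer-level, empirical confidence,
        # as opposed to the per-token softmax of the final answer.
        answers = [sample_answer(prompt) for _ in range(n)]
        best, count = Counter(answers).most_common(1)[0]
        return best, count / n  # e.g. ("yes", 0.55) even if every run's final token looked confident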

Oh, you mean somewhere it is tracking the statistical likelihood of the output. Yeah I buy that, although I think it just tends towards the most likely output given the context that it is dragging along. I mean it wouldn’t deliberately choose something really statistically unlikely, that’s like a non sequitur.

Well, it's not tracking. As it predicts each token it is sampling from a probability distribution -- that's what the matrix multiplies are for. It gets a distribution over all tokens and then picks randomly according to that distribution. How flat or how spiky that distribution is tells you how confident it is in its answer.

But it then throws that distribution away / consumes it in the next token calculation. So it's not really tracking it per se.
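As a rough illustration of "flat vs. spiky": you can summarize the next-token distribution with its entropy. The logits below are made up; in practice they'd come from one forward pass of the model:

    import numpy as np

    def next_token_entropy(logits):
        # Shannon entropy of the next-token distribution:
        # low = spiky (confident), high = flat (uncertain).
        z = logits - logits.max()
        p = np.exp(z) / np.exp(z).sum()
        return float(-(p * np.log(p + 1e-12)).sum())

    spiky = np.array([8.0, 0.5, 0.2, 0.1])  # one token dominates
    flat = np.array([1.0, 1.0, 1.0, 1.0])   # no preference at all
    print(next_token_entropy(spiky), next_token_entropy(flat))  # ~0.01 vs ln(4) ~ 1.386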

From its point of view, what does it mean "to know"?

Is it the token (or set of tokens) that is strictly > 50% probable, or is it just the highest probability in a set of probabilities?

While generating bullshit is not ideal for a lot of use cases, you don't want your premier chatbot to say "I don't know" to the general public half the time. The investment in these things requires wide adoption, so they are always going to favour the "guesses".

My theory is that it's because the people building the models, and in charge of directing where they go, love the sycophantic yes-man behavior the models display.

They don't like hearing "I don't know"

You can TELL the models to do this and they'll follow your prompt.

"Give me your answer and rate each part of it for certainty by percentage" or similar.

could you please tell me how it generates that certainty score?

Vibes.

The whole thing is a statistical model, that's just what it is. No, I cannot in a reasonable way dissect how an LLM works to a satisfactory level to a skeptic.

He's not a skeptic; he's asking you to explicitly state your reasoning, with the expectation that either the readers will learn something or (more likely) you will realize that your thought and speech pattern there was the equivalent of an LLM hallucinating. Yes, you can prompt it as you suggested, and yes, you will generally receive a convincing answer, but it is not doing what you seem to think it is doing, i.e. the generated rating is complete bullshit that the model pulled out of its proverbial ass.

are you actually curious or do you just want to argue against it?

I think you're obviously wrong (based on my relatively detailed but certainly somewhat out of date and not expert level knowledge of LLM internals) but if you're willing to explain your reasoning I'm willing to reconsider my own position in light of any new information or novel observations you might provide.

GP is obviously wrong, and probably doesn't know about calibration and/or that it isn't even clear how to calibrate frontier models in the manner we need, given how complex and expensive the training is, and how tricky calibration becomes in e.g. mixture-of-experts and chain of thought approaches.

I suspect that introducing the calibration concept might be a case of too much too soon for some people.

As far as I understand it, the various probability matrices boil down to: what token has the highest likelihood of coming next, given this set of input tokens. Which then all gets chucked away and rebuilt when the most likely token is appended to the input set.

Objective assessment of internal state - again, to my non-expert eye - doesn’t appear to have any way to surface to me.

Big "if": if my rough working understanding is more or less correct, your calibration point makes a lot of sense to me. I'm not sure that it would make sense to someone who, e.g., imagines some form of active thinking process that is intellectualising about whether to output this or that token.

"I can only explain my beliefs to people who promise they'll agree" is certainly a unique take.

It's a statistical model of words and sentences, not knowledge. What does the LLM know about having a pebble in your shoe, or drinking a nice cup of coffee?

You can just tell the agent to do exactly that

I've had various agents backed by various models ignore the shit out of various rules and requests, at varying rates, but they all do it.

When you point it out: "Oh yes, I did do that, which is contrary to the rules, request <whatever>.. Anyway..."

If you are on a SOTA model, your context window is less than 100k tokens, and you don't have any vague or contradicting rules, then I've almost never seen a rule broken.

The most common failures I've seen come from tools that pollute their context with crap, so the LLM forgets stuff or just gets confused by all the irrelevant sentences; which, if the report is true, is probably what these AI notetakers are guilty of. This problem gets exacerbated if these tools turn on the 1M context window version.

Yeah, that's exactly why I have full confidence in that system, especially for medical notetaking. /s

Except you can't be sure it isn't producing nonsense when you do this, and generally the model(s) will be overconfident. This has been studied, see e.g. https://openreview.net/pdf?id=E6LOh5vz5x

    > An alternative way to obtain uncertainty estimates from LLMs is to prompt them directly. One benefit of this approach is that it requires no access to the internals of the model. However, this approach has produced mixed results: LLMs can sometimes verbalize calibrated confidence levels (Lin et al., 2022a; Tian et al., 2023), but can also be highly overconfident (Xiong et al., 2024). Interestingly, Xiong et al. (2024) found that LLMs typically state confidence values in the range of 80-100%, usually in multiples of 5, potentially in imitation of how humans discuss confidence levels. Nevertheless, prompting strategies remain an important tool for uncertainty quantification, along with measures based on the internal state (such as MSP).

>You can just tell the agent to do exactly that

You can.

It just won't do it.