Or you could have just said "they can't form new memories."

Sure, if you want to speak with the precision of a sledgehammer instead of a scalpel.

I actually wasn't aware of this story. The steady stream of unexpected and enriching information like this is exactly why I love Hacker News.

I thought maybe people would be curious to read about how we came to understand the condition and the history behind it, as well as any associated information. Forgive me for such a deep transgression as this assumption.

That is a descriptive, surface-level reduction. Now do the work to define what that actually means for the intelligence.

Nobody else in the thread is making an argument that relies on the distinction.

"Intelligence" is used most commonly to refer to a class or collection of cognitive abilities. I don't think there is a consensus on an exact collection or specific class that the word covers, even if you consider specific scientific domains.

LLMs have honestly been a fun way to explore that. They obviously have a "kind" of intelligence, namely pattern recall. Wrap them in an agent and you get another kind: pattern composition. Those kinds of intelligence have been applied to mathematics for decades, but LLMs have allowed us to apply them to a semantic text domain.
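To make that recall/composition distinction concrete, here's a toy sketch. Every name in it is hypothetical (`stub_model` stands in for a real LLM call); the point is only that the "agent" layer is an outer loop that chains individual recalls into a multi-step result.

```python
def stub_model(prompt: str) -> str:
    # "Pattern recall": a pure lookup over memorized prompt -> completion
    # pairs. A real LLM generalizes, but the interface is the same shape.
    memorized = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return memorized.get(prompt, "unknown")

def agent(steps: list[str]) -> list[str]:
    # "Pattern composition": the agent contributes no knowledge of its own;
    # it just sequences model calls and collects intermediate results.
    results = []
    for step in steps:
        results.append(stub_model(step))
    return results

print(agent(["capital of France?", "2 + 2?"]))
```

The composition ability lives entirely in the loop, not the model, which is why wrapping the same model in different scaffolding yields different apparent "kinds" of intelligence.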

I wonder if you could wrap image diffusion models in an agent set up the same way and get some new ability as well.