after you go from millions of params to billions+, models start to get weird (depending on training). just look at any number of interpretability research papers. Anthropic has some good ones.

> things start to get weird

> just look at research papers

You didn't add anything other than vibes either.

Interesting, what kind of weird?

Getting weird doesn’t mean that calling it text prediction is ‘bullshit’. Text prediction isn’t pejorative…