We already know Anthropic has done open source work for a while, such as the "flawed" MCP spec and the "skills" spec.

This release only covers other open-weight LLMs that have already been released. Even though they will use this research on their own closed Claude models, they will never release an open-weight Claude model, even for research purposes.

So this does not count; it was done specifically for the sake of this research only.

It's literally an open model that generates natural language text (or one that takes in text and turns it into activations). Why does engagement with the local models community "not count" if it isn't Claude? That makes very little sense to me.

Because we know what Embrace, Extend, and Extinguish means, for example. They're leeching off open source, not contributing in any meaningful way.

https://github.com/kitft/natural_language_autoencoders

Here’s the full source code for training your own NLA, provided by Anthropic.
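For anyone curious about the basic shape of what's in that repo: an autoencoder is trained to compress vectors through a bottleneck and reconstruct them. The actual natural-language-autoencoder method is in the linked code; the snippet below is just a generic toy linear autoencoder on random "activation" vectors (all names and hyperparameters here are made up for illustration), showing the encode → bottleneck → decode training loop.

```python
import numpy as np

# Toy sketch only: random vectors standing in for model activations.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32))          # fake 32-d activation vectors
W_enc = rng.normal(size=(32, 8)) * 0.1  # encoder: 32-d -> 8-d bottleneck
W_dec = rng.normal(size=(8, 32)) * 0.1  # decoder: 8-d -> 32-d

lr = 0.01
losses = []
for _ in range(500):
    Z = X @ W_enc                # encode into the bottleneck
    X_hat = Z @ W_dec            # decode back to activation space
    err = X_hat - X
    losses.append(float((err ** 2).sum(axis=1).mean()))
    # gradients of the mean per-sample squared reconstruction error
    g = 2.0 * err / len(X)
    grad_dec = Z.T @ g
    grad_enc = X.T @ (g @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(losses[0], losses[-1])     # reconstruction loss should drop
```

In the actual research the bottleneck is natural-language text rather than an 8-d vector, which is what makes the result interesting; a plain linear bottleneck like this is only the conceptual skeleton.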

Sorry, what are they embracing and extending?

Chinese open models? /s

To counter the grandparent you’re replying to: Embrace, Extend & Extinguish is a Microsoft strategy. So is FUD, and that’s all this is.

Humanity!

Those are generally used by someone who is behind. See: everything Meta does.