The author is not wrong. You seem unaware of how nascent LLM interpretability research still is.

See the thread and article below, posted earlier today, showing what we're still learning from these interpretability experiments.

https://news.ycombinator.com/item?id=47322887