Agreed, but tools that let laypeople look at "what's happening in latent space" would be really cool, and would at least let people who aren't writing a journal article get a better sense of what these models are doing.

Right now, I don't know where a journalist would even begin.

I don't think even the people at the forefront of AI can decode what's going on in latent space, much less the average Joe. We're given these clean examples as illustrations, but the reality is a totally jumbled, incoherent mess.

Not true at all. You can take the vector for a given embedding and compare it to other vectors in that region of latent space to get a sense of how the model categorizes it. You can even do this layer by layer to watch the model's understanding evolve.
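For anyone curious what that looks like in practice, here's a minimal sketch using the Hugging Face transformers library and GPT-2: it grabs a word's hidden state at every layer and compares it to a few probe words by cosine similarity. The probe words and the last-token pooling are illustrative assumptions on my part, not a standard protocol.

```python
# Minimal sketch: layer-by-layer similarity probing with GPT-2.
# Assumptions: torch + transformers installed; "bank" and the probe
# words are arbitrary choices for illustration.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

def layer_vectors(text):
    """Return the last token's hidden state at every layer."""
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    # out.hidden_states = embedding layer + one tensor per block
    return [h[0, -1] for h in out.hidden_states]

target = layer_vectors("bank")  # the word we want to interpret
probes = {w: layer_vectors(w) for w in ["river", "money", "chair"]}

cos = torch.nn.CosineSimilarity(dim=0)
for layer in range(len(target)):
    sims = {w: cos(target[layer], v[layer]).item()
            for w, v in probes.items()}
    print(f"layer {layer:2d}: " +
          "  ".join(f"{w}={s:.2f}" for w, s in sims.items()))
```

Serious interpretability work does this with words in context and learned probes rather than bare tokens, but even this toy version shows the basic move: the same comparison, repeated at each layer, traces how the representation shifts.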

That was aimed at CrowdStrike - the authors of the study - who should definitely have that skill level.