I don't think even the people at the forefront of AI can decode what's going on in latent space, much less the average Joe. We're shown these clean examples as illustrations, but the reality is a totally jumbled, incoherent mess.
Not true at all. You can take the vector for a given embedding and compare it to nearby vectors in latent space to get a sense of how the model categorizes it. You can even do this layer by layer to see how the model's understanding evolves.
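
For example, here's a rough sketch of the layer-by-layer version, along the lines of the "logit lens" trick: take the hidden state at each layer of GPT-2, project it back through the unembedding matrix, and see which tokens it sits closest to. The choice of model, the prompt, and reusing the final layer norm at every layer are just my assumptions for illustration, not the only way to do this.

```python
# Rough sketch, assuming GPT-2 via Hugging Face transformers.
# Reapplying the final layer norm to intermediate layers is a
# logit-lens-style approximation, not an exact decoding.
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

text = "The capital of France is"  # arbitrary example prompt
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Tuple of (num_layers + 1) tensors, each of shape (1, seq_len, hidden_dim):
# the embedding output plus the output of every transformer block.
hidden_states = outputs.hidden_states

# Unembedding matrix: maps hidden states back into vocabulary space.
unembed = model.lm_head.weight  # (vocab_size, hidden_dim)

# For the last token position, project each layer's hidden state through
# the unembedding and print the closest tokens at that layer.
for layer_idx, h in enumerate(hidden_states):
    vec = model.transformer.ln_f(h[0, -1])   # final layer norm (approximation)
    logits = vec @ unembed.T                  # similarity to every token embedding
    top = torch.topk(logits, 5).indices.tolist()
    print(f"layer {layer_idx:2d}:", [tokenizer.decode([i]) for i in top])
```

Running something like this, you typically see the top candidates drift from generic or syntactic tokens in the early layers toward the semantically right neighborhood ("Paris" and friends) in the later ones, which is exactly the "evolving understanding" point.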