I'm giving you, the user, probably the easiest way you've ever had to explore embedding space yourself. Embeddings are tricky and can mislead, but they often compose surprisingly intuitively, especially once you've played around and built up some intuition for it.

What is the impact of misleading embeddings, and how do they compose? I'm honestly interested but don't know enough to understand what you're saying.

Why would I want to explore the embedding space myself? Isn't this a tool for running cross-data exploratory analyses against unstructured data, pre-populated with content?

We can iterate fast as we build an understanding of useful paradigms of vector manipulation. Yesterday I added `debias_vector(axis, topic)` and L2-normalization guidance.
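For readers wondering what an operation like `debias_vector(axis, topic)` might do: only the function name comes from the message above, so the implementation below is my assumption — a common approach is standard vector rejection (remove the component of a topic vector that lies along a bias axis) followed by L2 normalization:

```python
import numpy as np

def l2_normalize(v):
    # Scale a vector to unit length so cosine similarity
    # reduces to a plain dot product.
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

def debias_vector(axis, topic):
    # Hypothetical sketch: subtract the projection of `topic`
    # onto `axis` (vector rejection), then re-normalize.
    axis = l2_normalize(np.asarray(axis, dtype=float))
    topic = np.asarray(topic, dtype=float)
    return l2_normalize(topic - np.dot(topic, axis) * axis)

# Example: strip a chosen axis out of a topic vector.
axis = np.array([1.0, 0.0, 0.0])
topic = np.array([0.5, 0.5, 0.0])
debiased = debias_vector(axis, topic)
# `debiased` is orthogonal to `axis` and has unit length.
```

The re-normalization step matters: after subtracting a component the vector shrinks, and downstream cosine comparisons behave more predictably when everything lives on the unit sphere.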

The manifold structure of embedding spaces isn't semantically uniform. You've found a nice little novelty, but it's not rigorous, and you're using AI slop to name this vector algebra instead of finding or running a benchmark to show that it actually works better.