That's a neat thought. What's the granularity of the text getting embedded? I assume that makes a large difference in what the average vector ends up representing?
~300-token chunks right now. Have other exciting embedding strategies in the works.
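
For anyone curious, a minimal sketch of the chunk-and-average idea under discussion, assuming a hypothetical `embed_fn` for whatever embedding call is in use and whitespace splitting as a rough stand-in for a real tokenizer:

```python
import numpy as np

def chunk_tokens(tokens, size=300):
    """Split a token list into consecutive ~`size`-token chunks."""
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

def average_embedding(text, embed_fn, chunk_size=300):
    """Embed each ~300-token chunk and average the resulting vectors.

    `embed_fn` is a placeholder: any function mapping a string to a
    fixed-length vector. Whitespace tokens only approximate real
    tokenizer counts.
    """
    tokens = text.split()
    chunks = chunk_tokens(tokens, chunk_size)
    vectors = [embed_fn(" ".join(chunk)) for chunk in chunks]
    return np.mean(vectors, axis=0)
```

With coarser chunks, each vector blends more topics before averaging; with finer ones, the mean is taken over more, narrower vectors, which is presumably why granularity shifts what the average represents.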