Seems to under-perform voyage-3-large on the same benchmark. At the same time, I'm unsure how useful benchmarks are for embeddings.
I had the same thought, although voyage is 32k vs 128k for cohere 4.
Anecdotal evidence points to benchmarks correlating with result quality for data I've dealt with. I haven't spent a lot of time comparing results between models, because we were happy with the results after trying a few and tuning some settings.
Unless my dataset lines up really well with a benchmark's dataset, creating my own benchmark is probably the only way to know which model is "best".
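Something like the rough sketch below is usually enough to get a signal on your own data; `embed()` is just a stand-in for whichever model you're testing, and the relevance labels come from your own query/document pairs.

```python
# Minimal custom retrieval benchmark, assuming you already have
# (query, relevant doc ids) pairs for your own corpus.
import numpy as np
from sklearn.metrics import ndcg_score

def embed(texts):
    # stand-in: call the embedding model under test, return an (n, dim) array
    raise NotImplementedError

def ndcg_at_10(queries, docs, relevant):
    # relevant[i] = set of doc indices that are relevant to queries[i]
    q = np.asarray(embed(queries), dtype=float)
    d = np.asarray(embed(docs), dtype=float)
    q /= np.linalg.norm(q, axis=1, keepdims=True)   # cosine similarity via
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # normalized dot products
    scores = q @ d.T                                # (num_queries, num_docs)
    truth = np.zeros_like(scores)
    for i, rel in enumerate(relevant):
        truth[i, list(rel)] = 1.0                   # binary relevance labels
    return ndcg_score(truth, scores, k=10)          # same metric the vendors report
```

Swap `embed()` between candidate models and the resulting number is directly comparable for your data, unlike the vendors' charts.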
Are people using 32k-token embeddings and no longer chunking?
It feels like embedding content that large -- especially in dense texts -- will lead to loss of fidelity/signal in the output vector.
My understanding is that long context models can create embeddings that are much better at capturing the overall meaning, and are less effective (without chunking) for documents that consist of short standalone sentences.
For example, "The configuration mentioned above is critical" now "knows" what configuration is being referenced, along with which project and anything else talked about in the document.
When you say long context models are less effective for documents that consist of short sentences, do you mean that embedding models with long context capabilities tend to be worse on shorter sentences, or just that _using_ their large context windows is less effective for docs made up of short sentences?
It is common to use long context embedding models as a feature extractor for classification models.
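A rough sketch of that pattern, with a hypothetical `embed()` standing in for the long-context model and a simple linear classifier on top:

```python
# Using document-level embeddings as features for a downstream classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def embed(texts):
    # stand-in for a long-context embedding model: one vector per full document,
    # no chunking required
    raise NotImplementedError

def train_classifier(documents, labels):
    X = np.asarray(embed(documents), dtype=float)   # (n_docs, dim) feature matrix
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)               # held-out accuracy
```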
Which benchmark are you referring to?
Voyage-3-large is a text-only and much larger model than Embed-v4. If you want to unlock multimodality with Voyage-3-large, you'd have to either OCR your data (usually with really bad results) or use a VLM to parse it into textual descriptions (this works alright, but the cost of the VLM calls will jack up your data pre-processing costs).
I think anyone who cares enough about embedding performance to use niche models is probably parsing their PDFs into some sort of textual format. Otherwise you need to orient all your pipelines to handle images, which adds significant complexity (hybrid search, reranking, LLM calls, etc. are all way harder with images).
Not to mention an image is optimistically 50 KB vs the same page represented as markdown is maybe 2–5 KB. When you're talking about pulling in potentially hundreds of pages, that's a 10–20x increase in storage, memory usage, and network overhead.
I do wish they had a more head-to-head comparison with voyage. I think they're the de facto king of proprietary embeddings and with Mongo having bought them, I'd love to migrate away once someone can match their performance.
Hey Serjester, email me at elliott@cohere.ai and let's arrange a time to chat. We did head-to-head evals with Voyage Large / Voyage Multimodal and I can share them with you if you are serious about moving your embeddings over. We tested configurations of top open-source, closed-source, multi-vector and single-dense embedding models, but I can only choose so many to put on a graph and I'm not in the business of giving Voyage free advertising haha. I agree with you that there is some complexity in multi-modal reranking w.r.t. inference-time speeds as well as data transfer / network latency costs. Happy to talk more :)
I messed up, I apologize.
I looked at the NDCG numbers and assumed they came from the same dataset, since Voyage and Cohere both reported NDCG. I now realize they were separate benchmarks that happen to use the same evaluation metric.
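For anyone else who made the same mix-up: NDCG is just a ranking metric that any benchmark can report, e.g. for one query with graded relevance labels:

```python
import numpy as np

rels  = np.array([3, 2, 0, 1, 0])                             # relevance of results, in ranked order
gains = 2.0**rels - 1
dcg   = np.sum(gains / np.log2(np.arange(2, len(rels) + 2)))  # discount gains by log of rank
ideal = np.sort(gains)[::-1]
idcg  = np.sum(ideal / np.log2(np.arange(2, len(rels) + 2)))  # best possible ordering
ndcg  = dcg / idcg                                            # ~0.99 for this ranking
```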
Why? How do you pick an embedding model without benchmarks?
The comment by SparkyMcUnicorn worded it better than I did.
You’re right, there’s no other way to compare embeddings than a benchmark.
It's just that whatever the benchmarks used by Voyage and Cohere track might not be relevant to your own needs.