Amazing!
I just tried the OCR capabilities with a photo of a DIN A4 page that was written on a typewriter. The image isn't the easiest to interpret: the text perspective is distorted because the page is part of a book and the margin toward the spine is very small, and there are many inline corrections due to typing errors made while the page was written (backspace couldn't erase characters back then, and arrow keys couldn't be used to insert text between existing words). Over the past months I've tried several LLMs on this very same image (1 of 200 pages awaiting digitization). This result is by far the most accurate so far. Only some very minor errors (which are also non-trivial for human transcribers) were made.
This page induced costs of about 25 cents. I assume I could tweak the input image a little more to consume fewer input tokens. OCR-ing all 200 pages would otherwise cost a juicy $50, although there is a generous $20 of free credits.
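A minimal sketch of the kind of image tweaking I have in mind, assuming Pillow (the filename is just a placeholder):

    from PIL import Image, ImageOps

    # "page_042.jpg" is a placeholder name for one scanned page.
    img = Image.open("page_042.jpg")
    img = ImageOps.grayscale(img)      # drop color channels the OCR doesn't need
    img = ImageOps.autocontrast(img)   # lift faded typewriter ink off the paper
    img.thumbnail((1600, 1600))        # cap the longer edge; fewer pixels usually means fewer image tokens
    img.save("page_042_small.jpg", quality=85)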
Induced cost:
  108.8k input tokens  => 16.32 cents
  24.5k output tokens  =>  8.58 cents
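For reference, the per-token rates implied by those figures (my back-of-envelope inference from the numbers above, not published pricing) reproduce both the per-page cost and the $50 estimate:

    # Rates inferred from the figures above, not from any published price list:
    # 16.32 cents / 108.8k input tokens  ~ $1.50 per 1M input tokens
    #  8.58 cents / 24.5k output tokens  ~ $3.50 per 1M output tokens
    IN_RATE, OUT_RATE = 1.50, 3.50  # USD per 1M tokens (assumed)

    def page_cost(in_tokens, out_tokens):
        return (in_tokens * IN_RATE + out_tokens * OUT_RATE) / 1_000_000

    per_page = page_cost(108_800, 24_500)
    print(f"per page:  ${per_page:.4f}")        # ~ $0.249
    print(f"200 pages: ${200 * per_page:.2f}")  # ~ $49.79, i.e. the "juicy" $50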
// Edit: I just re-tried the same task using a capability of the API that runs only a specific part of the model (e.g. _only_ OCR). This cuts cost by 3x (to ~8 cents/page) but significantly worsens the result: entire lines of the original document are missing, and there are also many errors in the text that was recognized.
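To make "only run a specific part of the model" concrete, a request along these lines is what I mean; the endpoint, auth header, and `task` field are hypothetical placeholders, not the documented Interfaze API:

    import requests

    # Hypothetical sketch: endpoint, header, and "task" field are
    # illustrative placeholders, not the documented Interfaze API.
    resp = requests.post(
        "https://api.interfaze.ai/v1/run",             # placeholder endpoint
        headers={"Authorization": "Bearer <API_KEY>"},
        files={"file": open("page_042_small.jpg", "rb")},
        data={"task": "ocr"},                          # restrict the run to OCR only
        timeout=120,
    )
    print(resp.json())  # response shape is likewise assumed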
New account created ~5 hours after this post, with a single comment specifically praising the model / product. I want to believe, but this sort of astroturfing isn't very encouraging.
Yup, run task mode runs a much smaller part of the model, which can drop the quality of scans. The issue we still have to figure out with run task is how much of the model is needed just for OCR and how to activate the right parts. A lot more improvements are coming here with the same cost reduction.
I'd be happy to test it against your sample and see how we can get good results at a lower per-page cost. Feel free to email me at yoeven@interfaze.ai
Have you tried this task with an actual OCR model like Google Cloud Vision AI? I'm not sure whether that's what Gemini uses under the hood, but multi-modal LLMs aren't designed to extract text like this, so it shouldn't be a surprise that they're not great at it.
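For anyone who wants to try that comparison themselves, a minimal sketch using the official google-cloud-vision Python client (the filename is a placeholder, and credentials are assumed to be configured in the environment):

    from google.cloud import vision  # pip install google-cloud-vision

    client = vision.ImageAnnotatorClient()  # reads GOOGLE_APPLICATION_CREDENTIALS
    with open("page_042.jpg", "rb") as f:
        image = vision.Image(content=f.read())

    # document_text_detection is tuned for dense text such as book pages
    response = client.document_text_detection(image=image)
    print(response.full_text_annotation.text)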
Google Cloud Vision AI is a specialized model built on CNN frameworks, and that kind of specialized model is part of the Interfaze architecture, which is a hybrid, so you get the best of both worlds. Google Cloud Vision was pretty far behind other specialized models like PaddleOCR anyway, so if you're looking for a pure CNN model, check those out.
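If you want to try the pure CNN route, a minimal sketch assuming the classic PaddleOCR 2.x Python API (the filename is a placeholder; the 3.x API differs):

    from paddleocr import PaddleOCR  # pip install paddleocr

    # use_angle_cls helps with skewed/rotated text like the lines near a book spine
    ocr = PaddleOCR(use_angle_cls=True, lang="en")
    result = ocr.ocr("page_042.jpg", cls=True)
    for box, (text, confidence) in result[0]:
        print(f"{confidence:.2f}  {text}")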
You can find the explanation and the comparison in the article, where we benchmarked pure CNN models, pure LLM models, and a hybrid architecture like ours.