But all those problems exist when rendering into a surface or rasterizing too. I just don't understand the reasoning: this is a hard problem, so let me make it harder by transforming it into a different problem that is just as hard as the original (PDF to structured data vs. PDF to raster), and then solve that new problem, which is also hard. It is absurd.

The problems don't actually exist in the way you think.

When extracting text directly, the goal is to put it back into content order, regardless of stream order. Then turn that into a string. As fast as possible.

That's straight text. If you want layout info, it does more. But it's not just processing the content as a straight stream and rasterizing the result; it's trying to avoid doing that work.

This is non-trivial on lots of PDFs, and a source of lots of parsing issues/errors, precisely because the extractor skips the full render-everything pass and tries to shortcut it.
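
To make that concrete, this is roughly what the direct path looks like with pdfminer.six, which does exactly this kind of layout analysis to rebuild reading order (a sketch; the file name is a placeholder):

    from pdfminer.high_level import extract_text

    # pdfminer.six runs layout analysis to reorder text objects into
    # reading order before joining them into one string.
    text = extract_text("input.pdf")
    print(text)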

When rasterizing, you don't care about any of this at all. PDFs were made to raster easily. It does not matter what order the text is in the file, or where the tables are, because if you parse it straight through, raster, and splat it to the screen, it will be in the proper display order and look right.

So if you splat it onto the screen, and then extract it, it will be in the proper content/display order for you. Same is true of the tables, etc.
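
For contrast, the raster path is about this simple. A minimal sketch, assuming a recent PyMuPDF (file names are placeholders):

    import fitz  # PyMuPDF

    doc = fitz.open("input.pdf")
    page = doc[0]
    # Rasterize the page exactly as a viewer would paint it; stream order
    # stops mattering because every object lands at its final position.
    pix = page.get_pixmap(dpi=150)
    pix.save("page-0.png")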

So the direct extraction problems don't exist if you can parse the screen into whatever you want, with 100% accuracy (and of course it doesn't matter if you use AI or not to do it).

Now, I am not sure I would use this method anyway, but your claim that the same problems exist is definitely wrong.

I don't think people are suggesting: build a renderer > build an OCR pipeline > run it on PDFs.

I think people are suggesting: use a readymade renderer > use readymade OCR pipelines/APIs > run it on PDFs.
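
Something like this sketch, assuming PyMuPDF as the readymade renderer and pytesseract/Tesseract as the readymade OCR (the file name is a placeholder):

    import fitz                # PyMuPDF: the readymade renderer
    import pytesseract         # wrapper around the readymade Tesseract OCR
    from PIL import Image

    doc = fitz.open("scan.pdf")
    for page in doc:
        pix = page.get_pixmap(dpi=300)  # render the page to a bitmap
        img = Image.frombytes("RGB", (pix.width, pix.height), pix.samples)
        print(pytesseract.image_to_string(img))  # OCR the bitmap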

A colleague uses a document scanner to create a PDF of a document and sends it to you.

You must return the data represented in it, retaining as much structure as possible.

How would you proceed? Return just the metadata of when and how the scan was made?

Genuinely wondering.

You can use an existing readymade renderer to render into structured data instead of a raster.

Just to illustrate this point, poppler [1] (the most popular open-source PDF renderer) has a little tool called pdftocairo [2] which can render a PDF into an SVG. This means you can delegate all PDF rendering to poppler and work only with actual graphical objects to extract semantics.
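
For example (a sketch, assuming pdftocairo is on your PATH):

    import subprocess

    # Delegate rendering to poppler: turn page 1 of the PDF into an SVG,
    # so downstream code only sees positioned graphical objects.
    subprocess.run(["pdftocairo", "-svg", "-f", "1", "-l", "1",
                    "input.pdf", "page1.svg"], check=True)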

I think the reason this method is not popular is that there are still many ways to encode a semantic object graphically. A sentence can be broken down into words or letters. Table lines can be formed from multiple smaller lines, etc. Still, as the parent mentioned, rule-based systems work reasonably well for reasonably focused problems; you will just never get a general-purpose extractor, since the rules need to be written by humans.
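
For instance, a rule like "fragments sharing a baseline belong to the same line" is easy to write, but it carries assumptions (coordinate direction, tolerance) that break on the next weird PDF. A toy sketch with a hypothetical helper:

    # Toy rule (hypothetical helper): regroup fragmented spans (x, y, text)
    # into lines by baseline. Assumes y grows downward and that fragments on
    # the same line agree within `tol` points -- both assumptions can break.
    def group_into_lines(spans, tol=2.0):
        lines = {}
        for x, y, text in spans:
            key = round(y / tol)  # bucket nearby baselines together
            lines.setdefault(key, []).append((x, text))
        # sort buckets top-to-bottom, fragments left-to-right
        return [" ".join(t for _, t in sorted(frags))
                for _, frags in sorted(lines.items())]

    print(group_into_lines([(50, 100, "Hello"), (120, 99.8, "world")]))
    # -> ['Hello world']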

[1] https://poppler.freedesktop.org/
[2] https://gitlab.freedesktop.org/poppler/poppler/-/blob/master...

There is also PDF to HTML and PDF to text; MuPDF also has PDF to XML; both projects, along with a bucketful of other PDF toolkits, have PDF to PS; and there are many, many XML, HTML, and text outputs for PS.
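
MuPDF's XML output, for instance, is one call away through its Python bindings (a sketch; the file name is a placeholder):

    import fitz  # PyMuPDF, Python bindings for MuPDF

    doc = fitz.open("input.pdf")
    # MuPDF's structured XML: blocks, lines, and per-character positions,
    # straight from the renderer's layout pass.
    print(doc[0].get_text("xml"))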

Rastering and OCR'ing a PDF is like using regex to parse XHTML. My eyes are starting to bleed out; I am done here.

It looks like you make a lot of valid points, but you also have an extremely visceral reaction because there's a company out there that's using AI in a way that offends you. Which is fair, still.

But I'm a guy who's in the market for a PDF parser service, and I'm happy to pay a pretty penny per page processed. I just want a service that works without me thinking for a second about any of the problems you guys are all discussing. What service do I use? Do I care if it uses AI in the lamest way possible? The only thing that matters is the results.

There are two people in this thread, including you, dispensing PDF parsing wisdom, but from reading it all, it doesn't look like I can do things the right way without spending months fully immersed in this problem alone. If you or anyone else has a non-blunt-AI service I can use, I'll be glad to check it out.

It is a hard problem, yes, but you don't solve it by rasterizing it, OCR'ing it, and then using AI. You render it into a structured format. Then at least you don't have to worry about hallucinations, OCR problems with fancy fonts, text shaping problems, or the huge waste of GPU and CPU spent painting an image only to OCR it and throw it away.

Use a solution that renders PDF into structured data if you want correct and reliable data.

pdftotext from poppler gives you that without any juggling of formats.
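
For example (assuming poppler-utils is installed; file names are placeholders):

    import subprocess

    # "-layout" asks pdftotext to preserve the physical layout of the page,
    # keeping columns and tables roughly in place in the plain-text output.
    subprocess.run(["pdftotext", "-layout", "input.pdf", "output.txt"],
                   check=True)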

Sometimes scanned documents are structured really weirdly, especially tables. Visually, we can recognize the intention when they're rendered, and so can the AI, but you practically have to render them to recover the spatial context.

But why do you have to render it into a bitmap?

PDF to raster seems a lot easier than PDF to structured data, at least in terms of dealing with the odd edge cases. PDF is designed to raster consistently, and if someone generates something that doesn't raster in enough viewers, they'll fix it. PDF does not have anything that constrains generators to a sensible structured representation of the information in the document, and most people generating PDF documents are going to look at the output, not run it through a system to extract the structured data.