I convert the PDF into one image per page, then dump those images into either an OCR program (if the PDF is single-column) or a vision LLM (for double-column or more complex layouts).
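A minimal sketch of that pipeline in Python, assuming pdf2image (which needs the poppler utilities) and pytesseract (which needs the tesseract binary) are installed; the file name is a placeholder, and the single-vs-multi-column detection is left out:

    from pdf2image import convert_from_path  # pip install pdf2image
    import pytesseract                       # pip install pytesseract

    # Render each page to a PIL image; 300 DPI is a common sweet spot for OCR.
    pages = convert_from_path("input.pdf", dpi=300)

    for i, page in enumerate(pages, start=1):
        # Single-column pages go straight to Tesseract; multi-column or
        # complex layouts would be routed to a vision LLM instead.
        text = pytesseract.image_to_string(page)
        print(f"--- page {i} ---")
        print(text)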
Some vision LLMs can accept PDF input directly too, but you need to check that they convert to images and process those, rather than attempting (and failing) to extract the text some other way. I think OpenAI, Anthropic and Gemini all do the image-based version of this now, thankfully.
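For example, Anthropic's Messages API accepts a PDF as a base64 "document" block, and as I understand it each page gets processed as an image alongside any extracted text. A rough sketch (the model ID is a placeholder; use whatever you have access to):

    import anthropic, base64

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    with open("scan.pdf", "rb") as f:
        pdf_b64 = base64.standard_b64encode(f.read()).decode("utf-8")

    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder model ID
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": [
                {"type": "document",
                 "source": {"type": "base64",
                            "media_type": "application/pdf",
                            "data": pdf_b64}},
                {"type": "text", "text": "Transcribe this document exactly."},
            ],
        }],
    )
    print(message.content[0].text)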
If you don't have a known set of PDF producers, this is really the only way to safely consume PDF content. Type 3 fonts alone make pulling text content out unreliable or impossible, before you even get to PDFs containing images of scans.
I expect current LLMs significantly improve on the previous ways of doing this, e.g. Tesseract, when given an image input? Is there any benchmark you're aware of for model capabilities when it comes to ingesting PDFs?
I've been trying it informally and it's getting really good - Claude 4 and Gemini 2.5 now seem to do a perfect job, though I'm still paranoid that some rogue instruction in the scanned text (accidental or deliberate) might result in an inaccurate result.
Sadly this makes some sense; PDF represents characters in the text as offsets into its fonts, and the fonts are often incomplete subsets, so an 'A' in the PDF is often not good old ASCII 65. In theory there are two optional mechanisms that should tell you it's an 'A' (the font's /Encoding and a /ToUnicode map) - except when they don't, so the only way to know is to use the font to draw it.
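You can check how much of that optional metadata a given PDF actually carries with pypdf: list each page's fonts and whether a /ToUnicode map is present at all (a rough sketch; "sample.pdf" is a placeholder):

    from pypdf import PdfReader  # pip install pypdf

    reader = PdfReader("sample.pdf")
    for page_num, page in enumerate(reader.pages, start=1):
        resources = page.get("/Resources")
        if resources is None:
            continue
        fonts = resources.get_object().get("/Font")
        if fonts is None:
            continue
        for name, ref in fonts.get_object().items():
            font = ref.get_object()  # resolve the indirect reference
            print(page_num, name,
                  font.get("/Subtype"),   # e.g. /Type1, /TrueType, /Type3
                  "/ToUnicode" in font)   # False: no reliable map back to Unicode

A Type 3 font with no /ToUnicode entry is the worst case: its glyphs are arbitrary drawing procedures, so rendering the page and OCRing it is the only trustworthy route.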