This is a game changer. I've been meaning to run models to accomplish exactly this, but for the most part I don't have enough VRAM on my GPU for the conventional LLM approach. This seems like a far more efficient way to tackle a more narrowly scoped problem. Thank you for making it open source!

Let me know if you have any questions! What hardware are you on?