This is really cool! I am trying to find a way to accelerate LLM inference for PII detection, where speed really matters because we want to process millions of log lines per minute. I am wondering how fast we could get e.g. Llama 3.1 to run on a conventional NVIDIA card? 10k tokens per second would be fantastic, but even 1k would be very useful.
For that you only need high throughput, which is much easier to achieve than low latency, thanks to batching -- assuming the log lines or chunks can be processed independently. You can check the TensorRT-LLM benchmarks (https://nvidia.github.io/TensorRT-LLM/developer-guide/perf-o...), or try running vLLM on a card you have access to.
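For a rough idea, a minimal offline-batching sketch with vLLM could look like the following -- the model name, prompt and sampling parameters are just placeholders, adjust for whatever card and redaction prompt you actually use:

    # Minimal vLLM offline-batching sketch. Model name, prompt and sampling
    # parameters are placeholders -- adjust for your card and your task.
    from vllm import LLM, SamplingParams

    llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")      # assumes the 8B model fits in VRAM
    params = SamplingParams(temperature=0.0, max_tokens=64)  # deterministic, short outputs

    log_lines = [
        "2024-05-01T12:00:00Z user login from 10.0.0.1",
        "2024-05-01T12:00:01Z payment by john.doe@example.com failed",
    ]
    prompts = [f"Redact any PII in this log line:\n{line}" for line in log_lines]

    # vLLM batches these requests internally (continuous batching), which is
    # where the throughput comes from -- per-request latency matters less here.
    outputs = llm.generate(prompts, params)
    for out in outputs:
        print(out.outputs[0].text)

Throughput scales with batch size up to what the card's memory and compute allow, so it's worth benchmarking with realistic prompt lengths.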
PII redaction is a really good use-case.
Also, "10k tokens per second would be fantastic" might not be sufficient (even remotely) if you want to "process millions of log lines per minute".
Assuming a single log line at just 100 tokens, you need (100 * 2 million / 60) ~ 3.3 million tokens per second processing speed :)
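In code, just to make the assumptions explicit (100 tokens per line and 2 million lines per minute are both made-up round numbers):

    # Back-of-the-envelope throughput requirement; the inputs are assumptions.
    tokens_per_line = 100
    lines_per_minute = 2_000_000

    required_tokens_per_second = tokens_per_line * lines_per_minute / 60
    print(f"{required_tokens_per_second:,.0f} tokens/s")  # ~3,333,333 tokens/s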
Yeah, we have a mechanism that can bypass the AI model for log lines where we are pretty sure no PII is present -- kind of like smart caching using fuzzy template matching to identify things we have seen many times before, since logs tend to contain the same stuff over and over with tiny variations, e.g. different timestamps. So we only need to pass the lines we cannot be sure about to the AI for inspection, and we can of course parallelize. Currently we use a homebrew CRF model with lots of tweaks and it's quite good, but an LLM would of course be much better still and would catch a lot of cases that evade the simpler model.
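To give a rough idea of what I mean by the template matching, here is a much-simplified sketch (not our actual code -- the regexes and placeholders are just illustrative): mask the volatile bits of each line, hash the resulting template, and only send lines whose template hasn't already been cleared to the model.

    # Simplified sketch of the fast path: normalize volatile fields, hash the
    # template, and only send lines with unseen/uncleared templates to the LLM.
    # The patterns below are illustrative, not our production set.
    import hashlib
    import re

    VOLATILE = [
        (re.compile(r"\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:\.\d+)?Z?"), "<TS>"),
        (re.compile(r"\b[0-9a-fA-F]{8}(?:-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}\b"), "<UUID>"),
        (re.compile(r"\b\d+\b"), "<NUM>"),
    ]

    def template_key(line: str) -> str:
        for pattern, placeholder in VOLATILE:
            line = pattern.sub(placeholder, line)
        return hashlib.sha256(line.encode()).hexdigest()

    # Templates already inspected and confirmed PII-free (persisted in practice).
    cleared_templates: set[str] = set()

    def needs_llm(line: str) -> bool:
        # True if we can't be sure the line is PII-free from the template alone.
        return template_key(line) not in cleared_templates

The real logic is fuzzier than an exact template hash, but that's the general shape of it.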