> ollama benchmark ... for now, it's purely CPU, with DeepSeek R1 models tested based on the RAM available.

Then the results aren't comparable across boards with different RAM sizes. It'd be better to test the same set of model sizes on every board and report "did not fit" where a model couldn't run. Also, could you report the full ollama model name and version/size slug for each result?

> I pull Jeff's fork of the ollama-benchmark software

A link would be nice.

Hmm, I'm not sure if I'm missing something, but that first comment describes what I'm already doing: I test three different-sized DeepSeek R1 models (1.5, 8, and 16) on each board that can handle them, and then the data is reported.

For the second, the file I grabbed initially was https://github.com/geerlingguy/ai-benchmarks/blob/main/obenc... which, I now notice, wasn't actually modified in his repository, so I can look into that. Either way, the same version has been tested across everything so far.