> Who knows? What I know is that I need >64GB of RAM to run local models, and that means most people will need to upgrade from their 8GB/16GB setup to do the same. Graphics cards follow mostly the same pattern.

Depends on how big the models are, how fast you want them to run, and how much context you need for your usage. If you're okay with running only smaller models (which are still very capable in general; their main limitation is world knowledge), making fairly simple inferences at low overall throughput, you can just repurpose the RAM, CPU/iGPU, and storage in an average setup.
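As a rough illustration of why model size dominates the RAM question, here's a back-of-envelope sketch: weight memory is roughly parameter count times bytes per parameter, so quantization matters as much as model scale. The figures below are illustrative assumptions, not measurements of any specific model.

```python
def weights_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate memory (GB) for model weights alone: 1e9 params
    at bits_per_param each, ignoring KV cache and runtime overhead."""
    return params_billion * bits_per_param / 8

# A small model at aggressive quantization vs. a large one at full precision
# (hypothetical sizes chosen only to show the spread).
small = weights_gb(7, 4)     # 7B params at 4-bit
large = weights_gb(70, 16)   # 70B params at 16-bit

print(f"7B @ 4-bit:   ~{small:.1f} GB")   # fits in an 8GB machine
print(f"70B @ 16-bit: ~{large:.1f} GB")   # needs well over 64GB
```

Context length adds a KV cache on top of this, which is why heavy context use pushes the requirement up even for the same model.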