Respectfully, this is also bullshit for my use case. For me, the M1 purchase was a step up compared to Intel; the rest is diminishing returns for now.
It’s also not true if you care about certain workloads like LLM inference. My biggest concern is memory size and bandwidth, and on those counts older chips compare quite favorably to new ones: unified memory capacity is now what differentiates the premium tier (much like GPU VRAM) and has become a further upsell, making upgrades less cost-effective. :( I can justify $3k for “run a small LLM on my laptop for my job as an ML researcher,” but I still can’t justify $10k for “run a larger model on my Mac Studio.”
See the llama.cpp Apple Silicon benchmark thread: https://github.com/ggml-org/llama.cpp/discussions/4167#discu...