Bullshit. I shelled out $3k for my MBP M1 back in 2021 and I intend to use it until I can’t anymore.
It depends on the person and the use case. Different personalities etc
That's not particularly rational given how quickly computers progress in both performance and cost: a current-gen $1k MacBook Air will run circles around your M1. You'd probably be much better off spending the same amount of money on cheaper machines with a more frequent upgrade cadence. And you can always sell your old ones on eBay or something.
There are other factors to consider, such as screen size, storage and RAM, connectivity and ports, active versus passive cooling (thermal throttling), and speaker quality. Additionally, the M1 Pro still outperforms the latest M4 Air in GPU benchmarks.
For example if I spec out a 13" M4 MBA to match my current 14" M1 Pro MBP, which with tax came to ~$3k in 2021 (32GB RAM, 1TB storage), that $1k MBA ends up being ~$1900. Now that more frequent upgrade cadence doesn't make as much sense financially. After one purchase and one upgrade, you've exceeded the cost of the M1 Pro MBP purchase.
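Spelled out as a quick back-of-envelope sketch using the ballpark figures above (resale recovery would narrow the gap, but even so the cadence math is tight):

```python
# Back-of-envelope cost comparison, using the rough figures from this thread.
mbp_2021 = 3000     # 14" M1 Pro MBP (32GB/1TB), ~$3k with tax in 2021, bought once
mba_specced = 1900  # 13" M4 MBA spec'd to match (32GB RAM, 1TB storage)

# "Upgrade cadence" path: one purchase plus one upgrade.
two_mbas = 2 * mba_specced

print(f"keep one MBP:        ${mbp_2021}")
print(f"two spec'd-up MBAs:  ${two_mbas}")
print(two_mbas > mbp_2021)  # True: $3800 vs $3000, before any resale recovery
```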
Overall I don't disagree with your sentiment, especially for more casual use cases, but progress will never stop. There will always be a newer laptop with better specs coming out. I personally would rather beef up a machine and then drive it until it dies or can no longer perform the tasks I need it to.
i like using computers until they break on me, i've never really felt (for the usage i give my macbook) that it is lacking in power. Even after, what, 5 years?
i think i'll be upgrading in the next 2 or maybe 3 years if apple puts OLED screens on their new machines as it is rumored.
Respectfully, this is also bullshit for my use case. For me, the M1 purchase was a step up compared to Intel; the rest is diminishing returns for now.
It’s also not true if you care about certain workloads like LLM inference. My biggest concern, for example, is memory size and bandwidth, and older chips compare quite favorably to new ones now that “GPU VRAM size” differentiates the premium market and becomes a further upsell, making upgrades less cost-effective. :( I can justify $3k for “run a small LLM on my laptop for my job as an ML researcher,” but I still can’t justify $10k for “run a larger model on my Mac Studio.”
See https://github.com/ggml-org/llama.cpp/discussions/4167#discu...
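The bandwidth point can be sketched with the usual rule of thumb: generating each token streams the whole model through memory, so memory bandwidth caps decode speed. This is a rough ceiling, not a benchmark; the bandwidth numbers are the nominal specs for an M1 Pro and a base M4, and the model size is an assumed ~7B model at 4-bit quantization:

```python
# Rough decode-speed ceiling for LLM token generation: each new token
# reads the entire model from memory, so tokens/sec <= bandwidth / model size.
def ceiling_tok_s(bandwidth_gb_s: float, model_gb: float) -> float:
    return bandwidth_gb_s / model_gb

model_gb = 4.0  # assumed: ~7B parameters at 4-bit quantization

print(ceiling_tok_s(200, model_gb))  # M1 Pro, ~200 GB/s nominal -> 50.0
print(ceiling_tok_s(120, model_gb))  # base M4, ~120 GB/s nominal -> 30.0
```

Which is why a 2021 M1 Pro can still out-generate a newer entry-level chip on this workload: the older part simply has the wider memory bus.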