Is it worth keeping your old CPU?

I still run a 6600 (65W peak) from 2016 as my daily driver. I have replaced the SSD once (the MLC drive lasted 5 years; hopefully the SLC drive from 2011 lasts forever?), the 2x 32GB DDR4 sticks (the Kingston/Micron ones lasted 8 years, replaced with AliExpress "Samsung" sticks for $50 a pop) and the monitor (the Eizo FlexScan 1932 lasted 15! years, RIP, replaced with an Eizo RadiForce 191M, which I highly recommend together with f.lux/redshift for exceptional image quality without blue light).

It's still powerful enough to run any game released this year that I throw at it at 60 FPS (with a low-profile 3050 from 2024), let alone compile any bloat.

Keep your old CPU until it breaks, completely... or actually until the motherboard breaks; I have a Kaby Lake 35W replacement waiting for the 6600 to die.

You could buy an SSD with an SLC cache that is bigger than the 2011 pure-SLC one, and probably cheaper.

We have gone through this if you look at my comment history.

Yes, you can do everything, but not without added complexity, and that complexity will end up failing faster.

We have peaked in all tech. Nothing will ever get as good as the raw peak in longevity:

- SSDs ~2011 (pure SLC)

- RAM ~2013 (DDR3: fast, low latency, but low clocks = cooler = lasts longer)

- CPUs ~2018 (debatable but I think those will outlast everything else)

I'd expect every new generation of computers not to last as long as the previous one: we keep reducing the transistor size, and that means more fragility. I'm half surprised modern GPUs make it through shipping without melting from static.

My guess is that the longest-lived computer generation could be one that still uses through-hole components. Not a very useful machine by any metric though, I bet.

On the flip side, a lot of those old through-hole memory chips have failed. I'm not sure what the mechanism of action is, but it likely leads back to some kind of (at the time unknown) manufacturing defect. Every new generation requires higher purity and better quality control (because those tiny transistors are less tolerant of defects). If we optimized for longevity rather than flops per dollar or per watt, we would likely keep making the same hardware for a very long time, optimizing the process along the way and learning about the pitfalls. Maybe you can see such things in the military or industrial computing spaces.

What metric are you using to determine peak? Just long life?

I don't know about most people, but how long a wafer of silicon keeps working past its obsolescence is just not that important.

It is when we peak in nanometers too.

If you don't care about power efficiency, sure ;)

I bought a GeForce RTX 3080 at launch and boy was I surprised at the power draw and heat/noise it pumps out. I wonder why anybody bothers with the 90 series at all.

I actually run it ~10% underclocked; it barely affects performance but greatly reduces heat/noise. These cards are configured to deliver maximum performance at any cost short of system instability.

For my next GPU I am probably going mid-range, to be honest; these beefy GPUs are not worth it anymore cost- and performance-wise. You are better off buying the cheaper models and upgrading more often.

> I bought a GeForce RTX 3080 at launch and boy was I surprised at the power draw and heat/noise it pumps out. I wonder why anybody bothers with the 90 series at all.

More VRAM, and NVLink (on some models). You can easily run them at lower power limits. I've run CUDA workloads with my dual 3090s set as low as 170W to hit that sweet spot on the efficiency curve. You can actually go all the way down to 100W!
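
Something like this works if you want to set the limits programmatically rather than per boot by hand (a minimal sketch using NVIDIA's NVML Python bindings; the 170W figure is just the value mentioned above, and the driver only accepts limits inside each card's min/max constraints):

```python
# Minimal sketch: cap NVIDIA GPU power limits via NVML (pip install nvidia-ml-py).
# Requires admin/root privileges; the limit resets on reboot unless reapplied.
import pynvml

TARGET_MILLIWATTS = 170_000  # 170 W, the "sweet spot" figure from above

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        # Each card exposes a min/max constraint range; clamp to stay valid.
        lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
        limit = min(max(TARGET_MILLIWATTS, lo), hi)
        pynvml.nvmlDeviceSetPowerManagementLimit(handle, limit)
        print(f"GPU {i}: power limit set to {limit / 1000:.0f} W")
finally:
    pynvml.nvmlShutdown()
```

From the CLI, nvidia-smi -pl 170 (with -i to pick a specific GPU) does the same thing.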

Oh well, for GPU programming, sure, as VRAM is king depending on the task. But for gaming I won't go high-end again.

I'm all in on the 1030, the last passively coolable GPU.

But I had to upgrade to a 3050 because 2GB of VRAM is too little for modern titles.

Fun fact: One 6600 core can saturate the 1030 for skinned mesh animations.

But it only saturates the 3050 to 50%, which is perfect, because the world takes much less CPU (you upload it to the GPU and then it's almost one draw call; more like one draw call per chunk, but I digress), so OpenGL (ES) can render on it at lower motion-to-photon latency without waste: one core for rendering, one for gameplay, one for physics and one for the OS, audio and networking.

So the 14nm 6600 + 8nm 3050 is actually the combination I would use forever.

HL:A runs at 90 FPS with that combo too, on the low-res Vive 1.

Not that VR is going anywhere, but it's still the peak demanding application.

When I bought the 3080 Ti, it really made me rethink my setup. I'm using a space heater in an air-conditioned room, wasting energy for nothing.

Underclock it, it helps a lot.

[deleted]

When it comes to "is it better to throw away the 6600 and replace it with a 5600 than to keep running the 6600?" I'm torn, but: you probably need to use the 5600 for maybe 20 years to compensate for its manufacturing energy cost (which is not directly linked to $ cost). I think the 6600 might last that long with new RAM 10 years down the road; I'm not so sure the newer 5600 motherboard and the CPU itself will make it that long, because they were probably made to break to a larger extent.
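
Rough back-of-the-envelope for that payback, where every input is an assumption just to show the shape of the math (nobody publishes exact embodied-energy numbers for these parts):

```python
# Back-of-the-envelope energy-payback sketch. All inputs are illustrative
# assumptions, not measured values for the i5-6600 or Ryzen 5 5600.
EMBODIED_ENERGY_KWH = 300.0  # assumed manufacturing energy for a new CPU + board
POWER_SAVED_W = 10.0         # assumed average wall-power saving of the newer part
HOURS_PER_DAY = 4.0          # assumed daily use

kwh_saved_per_year = POWER_SAVED_W * HOURS_PER_DAY * 365 / 1000
payback_years = EMBODIED_ENERGY_KWH / kwh_saved_per_year
print(f"Energy payback: ~{payback_years:.0f} years")  # ~21 years with these inputs
```

Change the assumptions and the answer moves a lot, which is kind of the point.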

Also, the 6600 can be passively cooled in the Streacom case I already have; the 5600 is too hot.

This is a little confusing because you were referring to a Core i5-6600 and presumably an i5-7500T or i7-7700T above but now you mention a 5600. Are you referring to a Ryzen 5 5600?

Yes

As we discussed elsewhere you could put the Ryzen 5 5600 into 45W ECO mode and cool it passively. Or a Ryzen 5 7600 if you decide to jump up to AM5 (which is probably a good idea even though you'll need new RAM).

As I probably replied there too, I do not want to build hardware that can burn my house down if the wares are crap and the settings decide to reset for some reason.

It will simply throttle.

Which is not good for longevity.

Really depends on your use case. I personally still run my 17-year-old 2.4GHz Core 2 Duo with 4GB of RAM as my daily runner. I simply do not hit the walls even on that thing. Most folks here simply would not accept that, and not because they are spoilt, but because their workloads demand more.

A 17-year-old Core 2 Duo machine is definitely less powerful than a Raspberry Pi 5, which is going to use 1/7th of the power.

The Raspberry Pi 5 is a dud; either go with the 4 or a 3588 CM:

http://move.rupy.se/file/radxa_works.mp4

Or in a uConsole.

Also a RISC-V tablet:

http://move.rupy.se/file/pinetab-v_2dmmo.png

Not as battle-hardened as my ThinkPad X61s, but maybe we'll get a GPU driver soon... 3 years later...

> The Raspberry Pi 5 is a dud

cf. the Pi 4: 2–3X CPU performance, full 5Gbps USB 3.0 ports, a PCIe Gen2 x1 connector, dual 4-lane MIPI connectors, support for A2-class SD cards, an integrated RTC...

A dud?? What's the issue? The price?

The performance per watt is only 1.5x, so it's too hot.

The 3588 is waaaay more performant per watt, close to Apple's M1.

The I/O has been moved outside the SoC, which causes a lot of issues.

SD card speeds are enough for client-side use.

I do not know the specifics but a large issue with the 5 is that a lot of hardware acceleration for encoding and decoding video was removed, making it slower for anything to do with video.

This too.

You can only stream 720p out at 20 FPS from my 2711 though, so it only seems to decode well, i.e. watching/consuming media. (The future is producing.)

The 2712 can stream out 720p at 40 FPS. (CPU)

The 3588 can stream out 720p at 60+ FPS. (CPU)

Edit: HL2 runs at the "same" FPS on each (sometimes 300+ on 3588)...

Ah, yes, it's definitely a poor choice for most video-encoding and some video-decoding use-cases. Just not sure how GP goes from that to "dud"...

Absolutely, but until I can get that easily in a battle-hardened ThinkPad design, I will probably still be using this. I am not against upgrading at all, I just haven't needed it yet. That said, this last year a lot of applications have finally grown to the point that I can see the horizon creeping closer.

I love those as routers, firewalls, and other random devices on my mess of a home network where I just set things up for fun. Or as little NASes for friends and family that I can give to them for free or whatever.

Nothing older than Nehalem and Bulldozer got microcode mitigations for Spectre, so I'd say running a C2D online would be a liability by now.

…IF anyone really bothered to develop exploits against something practically no one uses anymore at this point.

I am still waiting to hear about a proven exploit in the wild for all that stuff we’re mitigating against.

It very much depends on the games you play though. When I upgraded from a 7700k, it was struggling with my Factorio world. My AMD 5700X3D handles it completely smoothly. Though now in Path of Exile 2, my CPU can barely maintain 30 fps during big fights.

The CPU is now the bottleneck for games that struggle, which makes sense, since the GPU load is most often configurable, while gameplay is, well, the hardcoded gameplay.

See PUBG, which has bloated Unreal so far past what any 4-core computer can handle because of anti-cheat and other incremental changes.

Factorio could add some "how many chunks to simulate" config then? If that does not break gameplay completely.

I upgraded two years ago to a Ryzen 5700 rather than a 5800 specifically for the lower TDP. I rarely max out the cores and the cooler system means the fan rarely spins up to audible levels.

Most (if not all?) BIOSes today will let you limit the TDP - on AMD it's often called ECO mode.

Cool, can you tune exactly how many watts or just on/off?

There are usually a few presets, e.g. 65W or 45W ECO modes for a 105W part, or you can set your own specific values for PPT/TDC/EDC.
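
For a rough idea of what those presets translate to on AM4 (commonly cited values; your BIOS is the authority, so treat these as illustrative):

```python
# Commonly cited AM4 power-preset values (illustrative; confirm in your BIOS).
# PPT = package power in W, TDC = sustained current in A, EDC = peak current in A.
PRESETS = {
    "105W TDP (stock)": {"PPT_W": 142, "TDC_A": 95, "EDC_A": 140},
    "65W ECO mode":     {"PPT_W": 88,  "TDC_A": 60, "EDC_A": 90},
}

def ppt_for_tdp(tdp_watts: float) -> int:
    """Rule of thumb on AM4: package power is roughly 1.35x the rated TDP."""
    return round(tdp_watts * 1.35)

print(ppt_for_tdp(45))  # ~61 W package power for a 45W ECO target
```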

Nice!

E5-2650v2 in a Chinese mATX motherboard for me. Got the CPU years ago for like $50 as an eBay server pull. 970 Evo SSD. 24GB of mismatched DDR3. Runs all my home server junk and my dev environment (containerized with Incus). Every year I tell myself I should grab a newer Ryzen to replace it, but it honestly just keeps chugging along and doesn't really slow me down.

> Is it worth keeping your old CPU?

I just "re-cycle" them.

Bought a 7700X two years ago. My 3600X went to my wife. The previous machine (forgot which one it was, but some Intel CPU) went to my mother-in-law. The machine three machines before that, my trusty old Core i7-6700K from 2015 (I think 2015): it's now a little Proxmox server at home.

I'll probably buy a 9900X or something now: I don't want to wait until late 2026/2027 for Zen 6 to come out. The 7700X shall go to the wife, the 3600X to the kid.

My machines typically work for a very long time: I carefully pick the components and assemble them myself and then test them. Usually when I pull the plug for the final time, it's still working fine.

And yet I like to not be too far behind: my 7700X from 2022 is okay, but I'll still upgrade. That doesn't mean it's not worth keeping: I'll keep it, just not for me.

Yep me too:

ThinkPad X61s (45nm) DDR2 / D512MO (45nm) DDR2 / 3770S (22nm) DDR3 / 4430S (22nm) DDR3

All still in client use.

All got new RAM this year, and when the SSDs break (all have SLC) I have new SLC SSDs and will install headless Linux for server duty on 1Gb/s symmetric fiber until the motherboards break in a way I can't repair. Will probably resolder caps.