I wish developers, and I'm saying this as one myself, were forced to work on a much slower machine, to weed out those who can't write efficient code. Software bloat has already gotten worse by at least an order of magnitude in the past decade.

Yeah, I recognize this all too well. There is an implicit assumption that all hardware is top-tier, all phones are flagships, all mobile internet is 5G, everyone has regular access to free WiFi, etc.

Engineers and designers should compile on the latest hardware, but the execution environment should be capped at 10th-percentile compute and connectivity on at least one rotating day per week.

Employees should be nudged to rotate between Android and iOS on a roughly monthly basis. Gate all the corporate software and ideally some perks (e.g. discounted rides as a ride-share employee) so that you have to experience both platforms.

If they get the latest hardware to build on, the build itself will become slow too.

My biggest pet peeve is designers using high-end Apple displays.

Your average consumer is using an ultra-cheap LCD panel with nowhere near the contrast ratio of the display you're designing your mocks on; all of your subtle tints get washed out.

This is similar to good audio engineers back in the day wiring up a dirt cheap car speaker to mix albums.

Those displays also have huge resolutions and eye-searing brightness by default, which is also how you get UI elements that are excessively large, tons of wasted space and padding, and insanely low contrast.

> This is similar to good audio engineers back in the day wiring up a dirt cheap car speaker to mix albums.

Isn't that the opposite of what's happening?

I have decent audio equipment at home. I'd rather listen to releases that were mixed and mastered with professional grade gear.

Similarly, I'd like to get the most out of my high-end Apple display.

Optimizing your product for the lowest common denominator in music/image quality sounds like a terrible idea. The people with crappy gear probably don't care that much either way.

Ideally, you do both. Optimize on crap hardware, tweak on nice hardware.

They shouldn't work on a slower machine. However, they should test on a slower machine. Always.

Even better is to measure real performance on your customers' machines.

Yes!

> were forced to work on a much slower machine

I feel like that's the wrong approach. Like telling a music producer to always work with horrible (think car or phone) speakers. True, you'll get a better mix and master if you test it on speakers you expect others to hear it through, but no one sane recommends defaulting to them for day-to-day work.

Same goes for programming: I'd lose my mind if everything was dog-slow, and I was forced to experience this just because someone thinks I'll make things faster for them if I'm forced onto a slower computer. Instead I'd just stop using my computer once the frustration outweighed the benefits and joy I get.

Although, any good producer is going to listen to mixes in the car (and today, on a phone) to be sure they sound at least decent, since this is how many consumers listen to their music.

Yes, this is exactly my point :) Just like any good software developer who doesn't know exactly where their software will run: they test on the type of device their users are likely to run it on, or at least one with similar characteristics.

The car test has been considered a standard by mixing engineers for the past four decades.

That's actually a good analogy. Bad speakers aren't just slow good speakers. If you try to mix through a tinny phone speaker you'll have no idea what the track will sound like even through halfway acceptable speakers, because you can't hear half of the spectrum properly. Reference monitors are used to have a standard to aim for that will sound good on all but the shittiest sound systems.

Likewise, if you're developing an application where performance is important, setting a hardware target and doing performance testing on that hardware (even if it's different from the machines the developers are using) demonstrably produces good results. For one, it eliminates the "it runs well on my machine" line.
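Such a hardware-target performance gate can live in the test suite itself. A minimal sketch (the `render_frame` workload and the budget value are illustrative stand-ins, not from any real project):

```python
import time

def render_frame(n=100_000):
    # Hypothetical workload standing in for the operation under test.
    return sum(i * i for i in range(n))

def within_budget(fn, budget_s, repeats=5):
    """Return True if the best of `repeats` runs meets the budget.

    Taking the best time filters out scheduler noise; the budget
    itself should be calibrated on the target hardware, not on
    whatever machine the developer happens to have.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        best = min(best, time.perf_counter() - start)
    return best <= budget_s

# A deliberately generous budget so the check passes almost anywhere;
# on real target hardware you'd tighten this to the actual requirement.
assert within_budget(render_frame, budget_s=5.0)
```

Run as part of CI on the target-spec machine, a failing assertion replaces "it runs well on my machine" with an objective pass/fail.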

> can't write efficient code. Software bloat has already gotten worse by at least an order of magnitude in the past decade.

Efficiency is a good product goal: benchmarks and targets for improvement are easy to establish and measure, they make users happy, and thinking about how to make things faster is a good way to encourage people to read the code that's there, instead of focusing only on new features (i.e., code that's not there yet).

However, it doesn't sell very well: your next customer is probably not going to be impressed that your latest version is 20% faster than the previous version they also didn't buy. This means that unless you have enough happy customers, you're going to have a hard time convincing yourself that I'm right, and you're going to keep looking for roundabout ways of making things better.

But reading code, and re-reading code, is the only way to really get it into your brain; it's the only way you'll see better solutions than the compiler, and the only way you'll remember you have a useful library function to reuse instead of writing more and more code. It's the only guaranteed way to stop software bloat, and giving your team the task of "making it better" is a great way to make sure they read it.

When you know what's there, your next feature will be smaller too. You might even get bonus features by making the change in the right place, instead of as close to the user as possible.

Management should be able to buy into that if you explain it to them, and if they can't, maybe you should look elsewhere...

> a much slower machine

Giving everyone laptops is also one of those things: they're slow even when they're expensive, so developers have to work hard to make things fast enough there, which means it'll probably be fine when they put it on the production servers.

I like having a big desktop[1] so my workstation can have lots of versions of my application running, which makes it a lot easier to determine which of my next ideas actually makes things better.

[1]: https://news.ycombinator.com/item?id=44501119

Using the best/fastest tools I can is what makes me faster, but my production hardware (i.e. the tin that runs my business) is low-spec because that's cheaper, and higher-spec doesn't have a measurable impact on revenue. But I measure this, and I make sure I'm always moving forward.

Perhaps the better solution would be to have the fast machine, but run just the software you're developing in a pseudo-VM that uses up all of those extra resources with live analysis. The software runs as if it were on a slower machine, but you could gather plenty of info that would enable you to speed up the program for everyone.
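Something close to this exists on Linux today without a full VM: cgroup resource caps. A sketch, assuming systemd and a hypothetical binary `./myapp` (the specific quota values are illustrative):

```shell
# Run the app under test with ~15% of one CPU and 1 GiB of RAM,
# roughly approximating a low-end machine, while the rest of the
# workstation (editor, profiler, build) keeps full resources.
systemd-run --scope -p CPUQuota=15% -p MemoryMax=1G ./myapp
```

Pointing a profiler at the throttled process then gives you the "slow machine" experience and the analysis data at the same time.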

Why so complicated? Incentivize the shit out of it at the cultural level so they pressure their peers. This has gotten completely out of control.

Develop on a fast machine, test and optimise on a slow one?

The beatings will continue until the code improves.

I get the sentiment, but taken literally it's counterproductive. If the business cares about perf, put it in the sprint planning. But they don't. You'll just be writing more features with more personal pain.

For what it's worth, console gamedev has solved this. You test your game on the weakest console you're targeting. This usually shakes out as a stable perf floor for PC.

I came here to say exactly this.

If developers are frustrated by compilation times on last-generation hardware, maybe take a critical look at the code and libraries you're compiling.

And as a sibling comment notes, absolutely all testing should be on older hardware, without question, and I'd add with deliberately lower-quality and lower-speed data connections, too.
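On Linux, a degraded connection can be simulated with `tc netem` (the interface name `eth0` and the exact numbers are assumptions; requires root):

```shell
# Add 300 ms latency, 1% packet loss, and a 1 Mbit/s cap to eth0,
# approximating a poor mobile connection.
tc qdisc add dev eth0 root netem delay 300ms loss 1% rate 1mbit

# Remove the shaping when done testing.
tc qdisc del dev eth0 root
```

Browsers' dev tools offer similar throttling presets, but shaping at the OS level applies to the whole stack, not just HTTP.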

This is one of the things which cheeses me the most about LLVM. I can't build LLVM on less than 16GB of RAM without it swapping to high heaven (and often it just gets OOM killed anyways). You'd think that LLVM needing >16GB to compile itself would be a signal to take a look at the memory usage of LLVM but, alas :)

The thing that causes you to run out of memory isn't actually anything in LLVM, it's all in ld. If you're building with debugging info, you end up pulling in all of the debug symbols for deduplication purposes during linking, and that easily takes up a few GB. Now link a dozen small programs in parallel (because make -j) and you've got an OOM issue. But the linker isn't part of LLVM itself (unless you're using lld), so there's not much that LLVM can do about it.

(If you're building with ninja, there's a cmake option to limit the parallelism of the link tasks to avoid this issue).
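For reference, the cache variable in question is `LLVM_PARALLEL_LINK_JOBS`, honored when using the Ninja generator. A configure line might look like this (paths and the job cap of 2 are illustrative):

```shell
# Cap concurrent link jobs at 2 so debug links don't exhaust RAM;
# compile jobs still run at full parallelism.
cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
      -DLLVM_PARALLEL_LINK_JOBS=2 \
      ../llvm
```

With debug info, each link of a large LLVM tool can peak at several GB, so two concurrent links is a reasonable ceiling for a 16GB machine.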

I'll agree with one modification: developers should be forced to test on a much slower machine.

My final compiled binary runs much faster than something written in, say, python or javascript, but my oh my is the rust compiler (and rust-analyzer) slow compared to the nonexistent compile steps in those other languages.

But for the most part the problem here isn't developers. It's product management and engineering managers. They just do not make performance a priority. Just like they often don't make bug-fixing and robustness a priority. It's all "features features features" and "time to market" and all that junk.

Maybe make the product managers use 5-year-old mid-range computers. Then when they test the stuff the developers have built, they'll freak out about the performance and prioritize it.

Efficiency costs development time and thus money. Computers getting faster is what made software development cheaper and possible to use for solving problems.

Usually software gets developed to be just fast enough that people barely accept it on the computers of their time. You can do better by setting explicit targets, like Google's RAIL model. Optimizing any further is usually just a waste of resources.

Contrarian here. I wish all product managers were forced to work on a much slower machine, so that "make this shit fast" becomes the highest priority issue in the backlog.

Nobody is writing slow code specifically to screw over users with old devices. They're doing it because it's the easiest way to get through their backlog of Other Things. As an example, performance is a priority for a lot of competitive games, and they perform really well on everything from the latest 5090 to a fairly old laptop with integrated graphics. That's not because they only hired rockstar performance experts, but because it was a product priority.

i work as a mid-level engineer with a 4+ year old dell (handed down to me when i joined), which is the same generic laptop that someone from the admin team receives. some of my colleagues have similar specs, and we haven't yet been lucky enough to get an upgrade.

might be down to the tech culture here, but we don't automatically write the most efficient code either. for a lot of simple projects, these "bad" machines are still capable enough unfortunately.

it's absolutely the wrong approach.

software should be performance tested, but you don't want a situation where the time of a single iteration is dominated by the duration of functional tests and build time. the faster software builds and tests, the quicker solutions get delivered. if giving your developers 64GB of RAM instead of 32GB halves test and build time, you should happily spend that money.

Assuming you build desktop software: you can build it on a beastly machine but run it on a reasonable machine. Maybe local builds for special occasions, but then it's special, so you can wait.

Sure, occasionally run the software on the build machine to make sure it works on beastly machines; but let the developers experience the product on normal machines as the usual.

I wish this hell on other developers, too. ;-)

Yeah, but working with Windows, Visual Studio, and corporate security software on an 8GB machine is just pain.

Right, optimize for horrible tools so the result satisfies the bottom 20%. Counterpoint: id Software produced amazingly performant programs using top-of-the-line gear. What you're trying to do is enforce a cultural norm by hobbling programmers' hardware. If you want fast programs, you need to make that a criterion; slow hardware isn't going to get you there.