I'd wager that's more likely due to Windows than the hardware. Sure, the hardware plays a part, but it's not the whole story, or even most of it.

My C++ projects have a Python-heavy build system attached, where the main script that prepares everything and kicks off the build takes significantly longer to run on Windows than on Linux on the same hardware.

Afaik a lot of it is NTFS. It's just so slow with lots of small files. Compare unzipping a moderately large source repo on Windows vs. a POSIX system, it's night and day.
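If you want to measure this yourself rather than eyeball an unzip, here's a quick sketch (file count and sizes are arbitrary, bump them up for a starker contrast) that times creating and then deleting a pile of tiny files. Run it unmodified on Windows and Linux on the same box:

```python
import os
import shutil
import tempfile
import time

# Hypothetical microbenchmark: many tiny files, like a source tree or
# node_modules. The absolute numbers don't matter, the cross-OS ratio does.
N = 2000

root = tempfile.mkdtemp(prefix="smallfile-bench-")

start = time.perf_counter()
for i in range(N):
    with open(os.path.join(root, f"f{i}.txt"), "w") as fh:
        fh.write("x" * 64)  # 64 bytes, metadata cost dominates
create_s = time.perf_counter() - start

start = time.perf_counter()
shutil.rmtree(root)  # recursive delete, one unlink syscall per file
delete_s = time.perf_counter() - start

print(f"create {N} files: {create_s:.3f}s, delete: {delete_s:.3f}s")
```

On NTFS with Defender active the per-file overhead tends to dwarf the actual I/O, which is the whole point of the comparison.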

No, it’s not NTFS, it’s the file system filter architecture of the NT kernel.

I had internalised that it was Windows Defender hooking every file operation and checking it against a blacklist? I've had it forced off for years.

Just deleting 40,000 files from the node_modules of a modest JavaScript project can thoroughly hammer NTFS.

I think part of that is Explorer, rather than NTFS. Try doing it from the console instead: `rd /q /s <dir>`.

It still takes a lot longer than Linux or Mac OS X.

NTFS is definitely slower to modify file system structures than ext4.

A big part of it is that NT has to check with the security manager service every time it does a file operation.

The original WSL, for instance, was a very NT answer to the problem of Linux compatibility: NT already had a personality that looked like Windows 95, so just make one that looks like Linux. It worked great with the exception of slow file operations, which I think was seen as a crisis over in Redmond, because they broke many build systems and so many software developers couldn't or wouldn't use WSL. Hence we got the rather ugly WSL2, which uses a real Linux filesystem so that files perform like they do on Linux.

I don't know about ugly. Virtualization seems like a more elegant solution to the problem, as I see it. Though it also makes WSL pointless; I don't get why people use it instead of just using Hyper-V.