I do not find so-called "green threads" useful at all. In my opinion, outside of a few very esoteric cases, they serve no purpose in "native" languages that have full access to the OS's threading and IO facilities. They are useful only in "deficient" environments, like the inherently single-threaded request handlers of NodeJS.

Yeah, I agree that user threads are overused by programmers. For most situations, an OS thread is going to be far easier to work with. People like to cite context-switching overhead as a problem once you have thousands of threads, but the reality is that most people are not writing software that needs to handle many thousands of users all at once. Using green threads instead of OS threads to handle that kind of scale is a prime example of YAGNI.

Besides context switching, another issue with OS threads is that you have to use synchronization primitives carefully and watch for missing memory barriers. Green threads are a bit more forgiving.

The main point of real threads is that they are executed simultaneously on different cores (a single core can of course still run a few threads if needed), not switched cooperatively. Hence you need sync primitives, so better get used to them. Having a few threads on real cores can dramatically increase performance.

Green threads are just syntactic sugar around the need to process multiple requests from a single real thread without blocking on IO calls. That is somewhat helpful, but it is a workaround, not a generic recipe for performance and scalability.

> Main point of real threads is that the are executed simultaneously on different cores

Sorry, that's about as far from the truth as you can get. Threads were a thing long before multicore was available. Windows has had them since Win95/NT 4.0, and those didn't support multicore; nor did most of the OpenVMS machines where the NT thread model originated run multicore (were there any multicore VAXen at all?). The two main reasons for threads back then were to simplify concurrent IO and to be able to run a background computation without hanging the UI. Unix was different because it was initially oriented towards timesharing, i.e. running a single program per user, so threads were seen as redundant complexity (as was async IO - so even today "I want to run something yet still be able to take more user input at any time" can sometimes be a PITA).

And yeah, not everything needs scalability. Scalability is actually actively harmful for anything that's not a server. E.g. a UI needs to be responsive - but that doesn't necessarily mean "run on multiple cores", rather "be able to preempt low-priority stuff". And usually you don't want to deal with multicore synchronization.

MCUs rarely have multiple cores. Yet multithreading is ubiquitous, and on most RTOSes threads are much closer to "green threads" than to Windows ones. Though, as I keep screaming in every coroutine thread, I really wish C stackless coroutines were a thing - you can't afford a stack for everything.

There is more to this world than boring AWS CRUD.

Among other things, I write stateful multithreaded high-performance backends for enterprise, processing thousands of requests per second. The database is local to the backend and used only for persistence; all state resides in RAM. In this environment green threads provide zero benefit.

Even if I were talking to a remote database with slow access, the ability of green threads to "handle" a bazillion requests would just overload the database. There are no miracles around the laws of physics.