Module-level initialization has one huge problem in Python, though.
That means that as soon as you import a module, initialization happens. Ad infinitum, and you get import times of 0.5s or more for libraries like SQLAlchemy, Requests...
> Ad infinitum
The result of module import is cached (which is also what makes it valid to use a module as a singleton); you do not pay this price repeatedly. Imports can also be deferred; `import` is an ordinary statement in Python which takes effect at runtime rather than compile time.
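A minimal sketch of both points, using stdlib `json` as a stand-in for a heavier library: the second `import` is just a dictionary lookup in `sys.modules`, and an `import` inside a function is only executed when the function actually runs.

```python
import sys
import time

t0 = time.perf_counter()
import json                      # first import: the module's code actually runs
first = time.perf_counter() - t0

t0 = time.perf_counter()
import json                      # second import: just a sys.modules lookup
second = time.perf_counter() - t0

print(f"first:  {first:.6f}s")
print(f"second: {second:.6f}s")
print("json" in sys.modules)     # True: this cache is also why a module works as a singleton

def parse(data):
    # deferred import: nothing is paid until the first call, and after that
    # it's the same cached lookup as above
    import json
    return json.loads(data)

print(parse('{"ok": true}'))
```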
Modules that are slow to import are usually slow because of speculatively importing a large tree of sub-modules. Otherwise it's because there's actual work being done in the top-level code, which is generally unavoidable.
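One common mitigation for the sub-module case is to make the package expose its sub-modules lazily. A sketch with a made-up package `mypkg` and invented sub-module names, using the module-level `__getattr__` hook from PEP 562 (Python 3.7+):

```python
# mypkg/__init__.py  (hypothetical package; sub-module names are invented)

# Eager style - importing mypkg pays for everything up front:
# from . import orm, http, cli, plotting

# Lazy style - a sub-module is imported only when it is first accessed:
import importlib

_submodules = {"orm", "http", "cli", "plotting"}

def __getattr__(name):
    if name in _submodules:
        return importlib.import_module(f".{name}", __name__)
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
```

With this, `import mypkg` stays cheap and `mypkg.plotting` triggers the real import only on first use (and is cached thereafter).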
(Requests is "only" around a 0.1s import on my 11-year-old hardware. But yes, that is still pretty big; several times as long as starting the Python interpreter itself plus the default modules it imports automatically at startup.)
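If you want to reproduce that kind of number, a rough one-off measurement in plain Python (assuming Requests is installed) looks like the sketch below; CPython 3.7+ also ships `python -X importtime` for a per-module breakdown.

```python
import time

t0 = time.perf_counter()
import requests                  # assumes requests is installed; any library works here
print(f"import requests took {time.perf_counter() - t0:.3f}s")

# For a per-module breakdown instead, run:
#   python -X importtime -c "import requests"
```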
Initialization only happens once, when you import the module for the first time, afaik. Unless you are running multiple Python processes, that is.
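A small sketch of that multi-process caveat: with the `spawn` start method, each child starts a fresh interpreter and re-imports the module, so module-level work runs again per process (the file name here is made up).

```python
# spawn_demo.py  (hypothetical file name)
import multiprocessing as mp
import os

# Module-level "initialization": runs once per process that imports this file.
print(f"module-level code ran in pid {os.getpid()}")

def work():
    pass

if __name__ == "__main__":
    mp.set_start_method("spawn")   # spawn starts a fresh interpreter per child
    p = mp.Process(target=work)
    p.start()
    p.join()
    # With spawn, the module-level print appears twice: once in the parent,
    # once when the child re-imports this file.
```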
yep, the legacy codebase I maintain does a lot of this kind of stuff, and it has made it difficult to write unit tests in some cases because of all the code that runs at import and all the state we end up with
The article addresses this.
I know, I'm just complaining about the mountain of code that does this at my company. And there is no fixing it using the article's approach or any other for that matter due to the sheer scale of the abuse.