At my work we have a jit compiler that requires type hints under some conditions.
Aside from that, I avoid them as much as possible. The reason is that they are not really a part of the language, they violate the spirit of the language, and in high-usage parts of code they quickly become a complete mess.
For example a common failure mode in my work’s codebase is that some function will take something that is indexable by ints. The type could be anything, it could be List, Tuple, Dict[int, Any], torch.Size, torch.Tensor, nn.Sequential, np.ndarray, or a huge host of custom types! And you better believe that every single admissible type will eventually be fed to this function. Sometimes people will try to keep up, annotating it with a Union of the (growing) list of admissible types, but eventually the list will become silly and the function will earn a # pyre-ignore annotation. This defeats the whole point of the pointless exercise.
So, if the jit compiler needs the annotation I am happy to provide it, but otherwise I will proactively not provide any, and I will sometimes even delete existing annotations when they are devolving into silliness.
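For illustration, a hypothetical reconstruction of how such an annotation accretes (names invented, third-party types shown only in comments):

```python
from typing import Any, Union

# Hypothetical sketch: the Union grows with every new caller until
# it "becomes silly" and the function earns its ignore comment.
IntIndexable = Union[
    list,
    tuple,
    dict,  # Dict[int, Any] in practice
    # torch.Size, torch.Tensor, nn.Sequential, np.ndarray, ...
]

def head(xs: IntIndexable) -> Any:
    return xs[0]

print(head([1, 2]))       # 1
print(head({0: "zero"}))  # 'zero'
```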
That's the same complaints people had about TypeScript in the beginning, when libraries such as Express used to accept a wide range of input options that would be a pain to express in types properly. If you look at where the ecosystem is now, though, you'll see proper type stubs, and most libraries get written in TS in the first place anyway. When editing TS code, you get auto-completion out of the box, even for deeply nested properties or conditional types. You can rely on types being what the compiler says they are, and runtime errors are a rarity now (in properly maintained code bases).
> The reason is that they are not really a part of the language, they violate the spirit of the language, and in high-usage parts of code they quickly become a complete mess.
I'll admit that this is what I hate about Python, and it's probably this spirit of the language as you call it. I never really know what parameters a function takes. Library documentation often shows a few use cases, but doesn't really provide a reference; so I end up having to dig into the source code to figure it out on my own. Untyped and undocumented kwargs? Everywhere. I don't understand how someone could embrace so much flexibility that it becomes entirely undiscoverable for anyone but maintainers.
Because the flexibility has been a boon and not a problem. The problem only comes when you try to express everything in the type system, that is third party (the type checkers for it) and added on top.
It's a boon if the goal is to write code then go home. It's a loaded footgun if the goal is to compose a stack and run it in production within SLO.
Python type hints manage to largely preserve the flexibility while seriously increasing confidence in the correctness, and lack of crashing corner cases, of each component. There's really no good case against them at this point outside of one-off scripts. (And even there, I'd consider it good practice.)
As a side bonus, lack of familiarity with Python type hints is a clear no-hire signal, which saves a lot of time.
I think with types there is a risk of typing things too early or too strictly, or of types nudging one in a direction that reduces the applicability and flexibility of the final outcome. Some things can be difficult to express in types, so people choose easier-to-type solutions that are less flexible and introduce more work later, when things need to change, due to that inflexibility or limited applicability.
People say this all the time, but I've never seen any data proving it's true. It should be rather easy to test, too: I'm at a big company and different teams use different languages. The strictly typed languages don't have fewer defects, and those teams don't ship features any faster than the teams using loosely typed languages.
What I've experienced is that other factors make the biggest difference. Teams that write good tests, have good testing environments, good code review processes, good automation, etc tend to have fewer defects and higher velocity. Choice of programming language makes little to no difference.
I work at big tech and the number of bad deploys and reverts I've seen go out due to getting types wrong is in the hundreds. Increased type safety would catch 99% of the reverts I've seen.
> Because the flexibility has been a boon and not a problem
Well, you could say that the problem in this case was the lack of documentation, if you wanted. The type signature could be part of the documentation, from this point of view.
Let me give a kind-of-concrete example: one year I was working through a fast.ai course. They have a Python layer above the raw ML stuff. At the time, the library documentation was mediocre: the code worked, there were examples, and the course explained what was covered in the course. There were no type hints. It's free (gratis), I'm not complaining. However, once I tried making my own things, I constantly ran into questions about "can this function do X" and it was really hard to figure out whether my earlier code was wrong or whether the function was never intended to work with the X situation. In my case, type hints would have cleared up most of the problems.
If the code base expects flexibility, trusting documentation is the last thing you'd want to do. I know some people live and die by the documentation, but that's just a bad idea when duck typing or composition is heavily used, for instance, and documentation should be very minimal in the first place.
When a function takes a myriad of potential inputs, "can this function do X" is a question you answer by reading the function or the tests, not the prose on how it was intended 10 years ago or how some other random dev thinks it works.
Documentation doesn’t have to be an essay. A simple, automatically generated reference with proper types goes a long way to tell me “it can do that” as opposed to “maybe it works lol”. That’s not the level of engineering quality I’m going for in my work.
This whole discussion is about how you might not want to be listing every single type a function accepts. I also kinda wonder how you automatically generate that for duck typing.
from typing import Protocol

class SupportsQuack(Protocol):
    def quack(self) -> None: ...
This of course works with dunder methods and such. Also, you can annotate with @runtime_checkable (also from typing) to make `isinstance`, etc. work with it.
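A minimal sketch of that runtime_checkable behavior (Parrot is an invented example class):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsQuack(Protocol):
    def quack(self) -> None: ...

class Parrot:  # no inheritance needed: it matches structurally via quack()
    def quack(self) -> None:
        print("squawk")

print(isinstance(Parrot(), SupportsQuack))  # True
print(isinstance(42, SupportsQuack))        # False
```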
You're then creating a Protocol for every single function that could rely on some duck typing.
Imagine one of your functions just wants to move an iterator forward, and another just wants the current position. You're stuck with either requiring the full iterator interface when only part of it is needed, or creating one protocol per function.
In day-to-day life that's dev time that doesn't come back, as people are now spending time reading the protocol spaghetti instead of reading the function code.
I don't deny the usefulness of typing and interfaces in stuff like libraries and heavily used common components. But that's not most of your code in general.
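For concreteness, the per-capability split described above might look like this (hypothetical names; one narrow protocol per function rather than demanding the full iterator interface):

```python
from typing import Protocol

class SupportsAdvance(Protocol):
    def advance(self) -> None: ...

class SupportsPosition(Protocol):
    def tell(self) -> int: ...

def skip_one(cursor: SupportsAdvance) -> None:
    cursor.advance()

def where(cursor: SupportsPosition) -> int:
    return cursor.tell()

class Cursor:  # satisfies both protocols structurally, no inheritance
    def __init__(self) -> None:
        self.i = 0
    def advance(self) -> None:
        self.i += 1
    def tell(self) -> int:
        return self.i

c = Cursor()
skip_one(c)
print(where(c))  # 1
```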
For the collections case in particular, you can use the ABCs for collections that already exist[1]. There's probably one that satisfies your use case. There are also similar things for the numeric tower[2]. SupportsGE/SupportsGT/etc. should probably be in the stdlib, but you can import them from typeshed like so:
from __future__ import annotations
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from _typeshed import SupportsGT
---
In the abstract sense though, most code in general can't work with just anything that quack()s, or it would be incorrect to. The flip method on a penguin's flipper in a hypothetical animallib would probably have different implications than the flip method in a hypothetical lightswitchlib.
Or less by analogy, adding two numbers is semantically different than adding two tuples/str/bytes or what have you. It makes sense to consider the domain modeling of the inputs rather than just the absolute minimum viable to make it past the runtime method checks.
But failing that, there's always just Any if you legitimately want to allow any input (but this is costly as it effectively disables type checking for that variable) and is potentially an indication of some other issue.
> You're then creating a Protocol for every single function that could rely on some duck typing.
No, you are creating a Protocol (the kind of Python type) for every protocol (the descriptive thing the type represents) that is relied on for which an appropriate Protocol doesn’t already exist. Most protocols are used in more than one place, and many common ones are predefined in the typing module in the standard library.
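Several of those predefined protocols can be reused directly; a sketch using only stdlib types:

```python
from collections.abc import Iterable, Sized
from typing import SupportsIndex

def total(xs: Iterable[int]) -> int:
    return sum(xs)

def is_empty(xs: Sized) -> bool:
    return len(xs) == 0

def nth_char(s: str, i: SupportsIndex) -> str:
    return s[i]

print(total({1, 2, 3}))       # 6 -- sets are Iterable too
print(is_empty([]))           # True
print(nth_char("abc", True))  # 'b' -- bool implements __index__
```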
Except Typescript embraces duck typing. You can say "accept any object with a quack() method", for example, and it'll accept an unexpected quacking parrot. It can even tell when two type definitions are close enough and merge them.
I like Python a lot, and have been using it for personal projects since about 2010. It was only once I started working and encountering long-lived unfamiliar Python codebases regularly that I understood the benefits of type hints. It's not fun to have to trace through 5 or 6 different functions to try to figure out what type is being passed in or returned from something. It's even less fun to find out that someone made a mistake and it's actually two different incompatible things depending on the execution path.
That era of Python codebases were miserable to work in, and often ended up in the poorly thought out "we don't know how this works and it has too many bugs, let's just rewrite it" category.
> It's not fun to have to trace through 5 or 6 different functions to try to figure out what type is being passed in or returned from something.
My position is that what is intended must be made clear between type hints and the docstring. Skipping this makes for difficult to read code and has no place in a professional setting in any non-trivial codebase.
This doesn't require type hints to achieve. :param and :rtype in the docstring are fine if type hints aren't present, or for complex cases, plain English in the docstring is usually better.
:param and :rtype are type hints, just type hints that cannot be validated by tooling and are guaranteed to go out of sync with the code eventually.
Proper type hints are typically very easy to add if the codebase is not a mess that passes things around far and wide with no validation. If it is, the problem is not with the type hints.
I agree, although I've found that correct and comprehensive use of the docstring for this purpose has not existed in the environments I've worked in, or in the open source codebases I have needed to understand. Something about type hinting makes people more likely to do it.
I am sorry, but what's wrong with doing something like `print(type(var)); exit()` and just running it once, instead of digging through 5-6 stack frames?
Sometimes a function's input or return type can vary depending on the execution path? Also, inserting print statements is often not practical when working on web backend software which is kind of a big thing nowadays. If you can run the service locally, which is not a given, dependencies get mocked out and there's no guarantee that your code path will execute or that the data flowing through it will be representative.
They don’t violate the spirit of the language. They are optional. They don’t change the behaviour at runtime.
Type annotations can indeed seem pointless if you are unwilling to learn how to use them properly. Using a giant union to type your (generic) function is indeed silly; you just have to make that function generic as explained in another comment, or, I guess, remove the type hints.
- There is one obvious way to provide type hints for your code, it’s to use the typing module provided by the language which also provides syntax support for it.
- You don’t have to use it because not all code has to be typed
- You can use formatted strings, but you don’t have to
- You can use comprehensions but you don’t have to
- You can use async io, but you don’t have to. But it’s the one obvious way to do it in python
The obvious way to annotate a generic function isn’t with a giant Union, it’s with duck typing using a Protocol + TypeVar. Once you know that, the obvious way is… pretty obvious.
The obvious way to not be bothered with type hints, because you don’t like them, is not to use them!
Python is full of optional stuff: dataclasses, named tuples, metaprogramming, multiple inheritance.
You don't have to use these features, but there is only one way to use them.
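As a sketch of that Protocol + TypeVar approach (SupportsLT is an invented name here; typeshed ships similar ones):

```python
from typing import Any, Protocol, TypeVar

class SupportsLT(Protocol):
    def __lt__(self, other: Any, /) -> bool: ...

C = TypeVar("C", bound=SupportsLT)

def smallest(a: C, b: C) -> C:
    # Works with anything orderable -- ints, strs, custom classes with __lt__ --
    # without enumerating them in a Union, and preserves the input type.
    return a if a < b else b

print(smallest(3, 5))      # 3
print(smallest("b", "a"))  # 'a'
```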
"There should only be one way to do it" has not really been a thing in Python for at least the last decade or longer. It was originally meant as a counterpoint to Perl's "there's more than one way to do it," to show that the Python developers put a priority on quality and depth of features rather than quantity.
But times change and these days, Python is a much larger language with a bigger community, and there is a lot more cross-pollination between languages as basic philosophical differences between the most popular languages steadily erode until they all do pretty much the same things, just with different syntax.
> "There should only be one way to do it" has not really been a thing in Python for at least the last decade or longer.
It never was a thing in Python, it is a misquote of the Zen of Python that apparently became popular as a reaction against the TMTOWTDI motto of the Perl community.
How so? There is one way to do it. If you want typing, you use type hints. You wouldn't say that, say, functions are unpythonic because you can either use functions or not use them, therefore there's two ways to do things, would you?
And Python failed at that decades ago. People push terribly complicated, unreadable code under the guise of Pythonic. I disagree with using Pythonic as reasoning for anything.
This is a popular misquote from the Zen of Python. The actual quote is “There should be one—and preferably only one—obvious way to do it.”
The misquote shifts the emphasis to uniqueness rather than having an obvious way to accomplish goals, and is probably a result of people disliking the “There is more than one way to do it” adage of Perl (and embraced by the Ruby community) looking to the Zen to find a banner for their opposing camp.
Actually in Python it can. Since the type hints are accessible at runtime, library authors can for example change which values in kwargs are allowed based on the type of the argument.
So on the language level it doesn’t directly change the behavior, but it is possible to use the types to affect the way code works, which is unintuitive. I think it was a bad decision to allow this, and Python should have opted for a TypeScript style approach.
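A minimal sketch of that runtime accessibility (greet and check are invented names; libraries like pydantic and FastAPI do something far more elaborate on the same foundation):

```python
import typing

def greet(name: str, times: int = 1) -> str:
    return " ".join(["hello"] * times) + f", {name}"

# Annotations are ordinary runtime data, unlike TypeScript's erased types.
hints = typing.get_type_hints(greet)
print(hints)  # {'name': <class 'str'>, 'times': <class 'int'>, 'return': <class 'str'>}

# A library could validate kwargs against them, e.g.:
def check(func, **kwargs):
    h = typing.get_type_hints(func)
    for k, v in kwargs.items():
        if k in h and not isinstance(v, h[k]):
            raise TypeError(f"{k} should be {h[k].__name__}")

check(greet, name="world", times=2)  # passes silently
```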
"You can make it change the behaviour at runtime" is different from "it changes the behaviour at runtime", I think?
Lots of very useful tooling, such as dataclasses, and frameworks like FastAPI rely on this; why is your opinion that it's a bad thing?
In TypeScript, the absence of type-annotation reflection at runtime makes it harder to implement things that people obviously want, for example interop between TypeScript types and Zod schemas. Zod instead has to hook into the TS compiler to do these things.
I'm honestly not convinced Typescript is better in that particular area.
What python has opted for is to add first class support for type annotations in the language (which Javascript might end up doing as well, there are proposals for this, but without the metadata at runtime).
Having this metadata at runtime makes it possible to implement things like validation at runtime, rather than having to write your types in two systems, with or without codegen. (If Python had to resort to codegen to do this, as is necessary in TypeScript, I would personally find that less pythonic.)
I think on the contrary it allows for building intuitive abstractions where typescript makes them harder to build?
Yeah, but then you get into the issues of when and where generic types are bound and narrowed, which can make things more complicated. At that point one might be better off stepping back, redesigning, or letting go of perfect type-hint coverage for dynamic constructs that one couldn't even write in another type-safe language.
I don’t know anything about your jit compiler, but generally the value I get from type annotations has nothing to do with what they do at runtime. People get so confused about Python’s type annotations because they resemble type declarations in languages like C++ or Java. For the latter, types tell the compiler how to look up fields on, and methods that apply to, an object. Python is fine without that.
Python’s types are machine-checkable constraints on the behavior of your code. Failing the type checker isn’t fatal, it just means you couldn’t express what you were doing in terms it could understand. Although this might mean you need to reconsider your decisions, it could just as well mean you’re doing something perfectly legitimate and the type checker doesn’t understand it. Poke a hole in the type checker using Any and go on with your day. To your example, there are several ways described in comments by me and others to write a succinct annotation, and this will catch cases where somebody tries to use a dict keyed with strings or something.
Anyway, you don’t have to burn a lot of mental energy on them, they cost next to nothing at runtime, they help document your function signatures, and they help flag inconsistent assumptions in your codebase even if they’re not airtight. What’s not to like?
So the type is anything that implements the index function ([], or __getitem__); I think that's a Sequence, similar to Iterable.
>from typing import Sequence
>def third(something: Sequence):
>    return something[3]
however, if all you are doing is iterating over the thing, what you actually need is an Iterable
>from typing import Iterable
>def average(something: Iterable):
>    for thing in something:
>        ...
Statistically, the odds of a language being wrong are much lower than the odds of the programmer being wrong. Not to say that there aren't valid critiques of Python, but we must think of the creators of programming languages and their creations as the top of the field. If a 1400-elo chess player criticizes Magnus Carlsen's chess theory, it's more likely that the player is missing some theory than that he has found a hole in Carlsen's game; the player is better served by approaching the problem with the mentality that he is the problem, rather than the master.
> So the type is anything that implements the index function ([], or __getitem__), I think that's a Sequence
Sequence involves more than just __getitem__ with an int index, so if it really is anything int-indexable, a lighter protocol with just that method will be more accurate, both at conveying intent and at avoiding the need to evolve into an odd union type because you have something that satisfies the function’s needs but not the originally-defined type.
That is sort of ironic because the Pythonistas did not leave out any opportunity to criticize Java. Java was developed by world class experts like Gosling and attracted other type experts like Philip Wadler.
No world class expert is going to contribute to Python after 2020 anyway, since the slanderous and libelous behavior of the Steering Council and the selective curation of allowed information on PSF infrastructure makes the professional and reputational risk too high. Apart from the fact that Python is not an interesting language for language experts.
Google and Microsoft have already shut down several failed projects.
I get the idea that Python and Java went in opposite directions. But I'm not aware of any fight between both languages. I don't think that's a thing either.
Regarding stuff that happened in the 2020s: Python was developed in the 90s; Python 3 was launched in 2008. Besides some notable PEPs like type hints and WSGI, the rest of development is footnotes. The same goes for most languages (with perhaps the exception of the ever-growing C++); languages make strong backwards-compatibility guarantees, and so the bulk of their innovation comes from the early years.
Whatever occurs in the 20th and 30th years of development is unlikely to be revolutionary or very significant. Especially ignorable is the drama that might emerge in these discussions: slander, libel, inter-language criticism?
Just mute that out. I've read some news about communities like Ruby on Rails or Nix becoming overtaken by people and discussions of a political nature rather than development; they can just be ignored, I think.
It’s unlikely those layoffs are related to that, but rather the industry at large and end of zirp. Those type of folks are common in bigtech companies as well.
For example the dart/flutter team was decimated as well.
it was a sin that python's type system was initially released as a nominal type system. structural types should have been the target from day one.
being unable to just say "this takes anything that you can call .hello() and .world() on" was ridiculous, as that was part of the ethos of the dynamically typed python ecosystem. typechecking was generally frowned upon, with the idea that you should accept anything that fit the shape the receiving code required. it allowed you to trivially create resource wrappers and change behaviors by providing alternate objects to existing mechanisms. if you wanted to provide a fake file that read from memory instead of an actual file, it was simple and correct.
the lack of protocols made hell of these patterns for years.
I disagree. I think, if the decision was made today, it probably would have ended up being structural, but the fact that it isn't enables (but doesn't necessarily force) Python to be more correct than if it weren't (whereas forced structural typing has a certain ceiling of correctness).
Really it enabled the Python type system to work as well as it does, as opposed to TypeScript, where soundness is completely thrown out except for some things such as enums
Nominal typing enables you to write `def ft_to_m(x: Feet) -> Meters:` and be relatively confident that you're going to get Feet as input and Meters as output (and if not, the caller who ignored your type annotations is okay with the broken pieces).
In practice I've found the use for protocols in Python to be limited in general; the biggest usefulness comes from the iterable types, from dealing with code that's in a transitional period, or from better type annotations on callables (for example kwargs, etc.).
TypeScript sacrificed soundness to make it easier to gradually type old JS code and to allow specific common patterns. There is no ceiling for correctness of structural typing bar naming conflicts.
AFAIK, Python is missing fully-featured, up-to-date, centralized documentation on how to use type annotations.
The current docs are "Microsoft-like": they have everything, spread across different pages, in different hierarchies, some of them wrong, and with nothing telling you what else exists.
> That's your problem right there. Why are random callers sending whatever different input types to that function?
Because it’s nice to reuse code. I’m not sure why anyone would think this is a design issue, especially in a language like Python where structural subtyping (duck typing) is the norm. If I wanted inheritance soup, I’d write Java.
Ironically, that support for structural subtyping is why Protocols exist. It’s too bad they aren’t better, and aren’t the primary way to type Python code. It’s also too bad that TypedDict actively fought duck typing for years.
Because it’s nice to reuse code. It’s virtually never the case that a function being compatible with too many types is an issue. The issue is sometimes that it isn’t clear what types will be compatible with a function, and people make mistakes.
Python’s type system is overall pretty weak, but with any static language at least one of the issues is that the type system can’t express all useful and safe constructs. This leads to poor code reuse and lots of boilerplate.
>It’s virtually never the case that a function being compatible with too many types is an issue
This kind of accidental compatibility is a source of many hard bugs. Things appear to work perfectly, then at some point it does something subtly different, until it blows up a month later
> Why are random callers sending whatever different input types to that function?
Probably because the actual type it takes is well-understood (and maybe even documented in informal terms) by the people making and using it, but they just don’t understand how to express it in the Python type system.
> For example a common failure mode in my work’s codebase is that some function will take something that is indexable by ints. The type could be anything, it could be List, Tuple, Dict[int, Any], torch.Size, torch.Tensor, nn.Sequential, np.ndarray, or a huge host of custom types! And you better believe that every single admissible type will eventually be fed to this function. Sometimes people will try to keep up, annotating it with a Union of the (growing) list of admissible types, but eventually the list will become silly and the function will earn a # pyre-ignore annotation. This defeats the whole point of the pointless exercise.
You are looking for protocols. A bit futzy to write once, but for a heavily trafficked function it's worth it.
If your JIT compiler doesn't work well with protocols... sounds like a JIT problem not a Python typing problem
In my experience, the right tooling makes Python typing a big win. Modern IDEs give comprehensive real-time feedback on type errors, which is a big productivity boost and helps catch subtle bugs early (still nowhere near Rust, but valuable nonetheless). Push it too far though, and you end up with monsters like Callable[[Callable[P, Awaitable[T]]], TaskFunction[P, T]]. The art is knowing when to sprinkle types just enough to add clarity without clutter.
On the far end of this debate you end up with types like _RelationshipJoinConditionArgument which I'd argue is almost more useless than no typing at all. Some people claim it makes their IDE work better, but I don't use an IDE and I don't like the idea of doing extra work to make the tool happy. The opposite should be true.
Yes it is. I believe the reason is that this is all valid Python, while TypeScript is not valid JavaScript. Also, Python's type annotations are available at runtime (e.g. for introspection) while TypeScript types aren't.
That said, TypeScript's static type system is clearly both more ergonomic and more powerful than Python's.
I feel pretty similarly on this. Python’s bolted-on type system is very poor at encoding safe invariants common in the language. It’s a straitjacketed, Java-style OOP type system that’s a poor fit for many common Python patterns.
I would love it if it were better designed. It’s a real downer that you can’t check lots of Pythonic, concise code using it.
It sounds like that function is rightfully eligible to be ignored or to use the Any designation. To me that's why the system is handy. For functions that have specific inputs and outputs, it helps developers keep things straight and document code.
I give Rust a lot of points for putting control over covariance into the language without making anyone remember which one is covariance and which one is contravariance.
One of the things that makes typing an existing codebase difficult in Python is dealing with variance issues. It turns out people get these wrong all over the place in Python and their code ends up working by accident.
Generally it’s not worth trying to fix this stuff. The type signature is hell to write and ends up being super complex if you get it to work at all. Write a cast or Any, document why it’s probably ok in a comment, and move on with your life. Pick your battles.
Whether or not you explicitly write out the type, I find that functions with this sort of signature often end up with code that checks the type of the arguments at runtime anyway. This is expensive and kind of pointless. Beware of bogus polymorphism. You might as well write two functions a lot of the time. In fact, the type system may be gently prodding you to ask yourself just what you think you’re up to here.
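The variance point above can be made concrete with a toy example (invented classes; the commented-out call is the kind of thing mypy or pyright rejects):

```python
from collections.abc import Sequence

class Animal: ...

class Dog(Animal):
    def fetch(self) -> str:
        return "ball"

def add_animal(pen: list[Animal]) -> None:
    pen.append(Animal())  # legal only if pen really holds arbitrary Animals

def count(pen: Sequence[Animal]) -> int:
    return len(pen)

dogs: list[Dog] = [Dog()]
# add_animal(dogs)   # rejected by a type checker: list is invariant,
#                    # otherwise a plain Animal would sneak into a list[Dog]
print(count(dogs))    # fine: Sequence is covariant (read-only access)
```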
This is really just the same mistake as the original expanding union, but with overly narrow abstract types instead of overly narrow concrete types. If it relies on “we can use indexing with an int and get out something whose type we don’t care about”, then it’s a Protocol with the following method:
def __getitem__(self, i: int, /) -> Any: ...
More generally, even if there is a specific output type when indexing, or the output type of indexing can vary but in a way that impacts the output or other input types of the function, it is a protocol with a type parameter T and this method:
def __getitem__(self, i: int, /) -> T: ...
It doesn’t need to be union of all possible concrete and/or abstract types that happen to satisfy that protocol, because it can be expressed succinctly and accurately in a single Protocol.
As of Python 3.12, you don’t need separately declared TypeVars with explicit variance specifications, you can use the improved generic type parameter syntax and variance inference.
So, just:
class Indexable[T](Protocol):
    def __getitem__(self, i: int, /) -> T: ...
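Usage might look like this, spelled with the pre-3.12 TypeVar syntax so it also runs on older interpreters (behavior is the same):

```python
from typing import Protocol, TypeVar

T_co = TypeVar("T_co", covariant=True)
T = TypeVar("T")

class Indexable(Protocol[T_co]):
    def __getitem__(self, i: int, /) -> T_co: ...

def third(xs: Indexable[T]) -> T:
    return xs[3]

# All of the original Union's members satisfy the protocol structurally:
print(third([0, 1, 2, 3]))  # 3
print(third((9, 8, 7, 6)))  # 6
print(third({3: "three"}))  # 'three' -- a dict keyed by int also fits
```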
> eventually the list will become silly and the function will earn a # pyre-ignore annotation. This defeats the whole point of the pointless exercise.
No, this is the great thing about gradual typing! You can use it to catch errors and provide IDE assistance in the 90% of cases where things have well-defined types, and then turn it off in the remaining 10% where it gets in the way.
Define a protocol[0] that declares it implements `__getitem__` and type annotate with that protocol. Whatever properties are needed inside the function can be described in other protocols.
These are similar to interfaces in C# or traits in Rust - you describe what the parameter _does_ instead of what it _is_.
>The type could be anything, it could be List, Tuple, Dict[int, Any], torch.Size, torch.Tensor, nn.Sequential, np.ndarray, or a huge host of custom types!
That's not how you are supposed to use static typing? Python has "protocols" that allows for structural type checking which is intended for this exact problem.
IMO, the trick to really enjoying python typing is to understand it on its own terms and really get comfortable with generics and protocols.
That being said, especially for library developers, the not-yet-existent intersection type [1] can prove particularly frustrating. For example, a very frequent pattern for me is writing a decorator that adds an attribute to a function or class, and then returns the original function or class. This is impossible to type hint correctly, and as a result, anywhere I need to access the attribute I end up writing a separate "intersectable" class and writing either a type guard or calling cast to temporarily transform the decorated object to the intersectable type.
Also, the second you start to try and implement a library that uses runtime types, you've come to the part of the map where someone should have written HERE BE DRAGONS in big scary letters. So there's that too.
So it's not without its rough edges, and protocols and overloads can be a bit verbose, but by and large once you really learn it and get used to it, I personally find that even just the value of the annotations as documentation is useful enough to justify the added work adding them.
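A sketch of that decorator pattern and the cast workaround (register and Registered are invented names):

```python
from typing import Callable, Protocol, TypeVar, cast

F = TypeVar("F", bound=Callable[..., object])

# The "intersectable" class: a callable that also carries .registered
class Registered(Protocol):
    registered: bool
    def __call__(self, *args: object, **kwargs: object) -> object: ...

def register(func: F) -> F:
    func.registered = True  # type: ignore[attr-defined]
    return func  # the true return type "F & Registered" isn't expressible

@register
def handler() -> str:
    return "ok"

# At use sites, cast recovers the attribute for the type checker:
print(cast(Registered, handler).registered)  # True
print(handler())                             # ok
```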
Though to be honest I am more concerned about that function that accepts a wild variety of objects that seem to be from different domains...
I'd guess inside the function is a HUGE ladder of 'if isinstance()' to handle the various types and special processing needed. Which is totally reeking of code smell.
The issue of having multiple input types that are indexable by ints is exactly why I prefer that type hints remain exactly that: "hints", not mandated checks. My philosophy for type hints is that they are meant to make codebases easier to understand without getting into a debugger. Their functional equivalent should be that of comments: a cleaner, more concise way of describing a variable than a full-on docstring.
Though maybe there's a path forward to give a variable a sort of "de-hint", in that it can be anything BUT this type (i.e. an argument can be any indexable type, except a string).
>though maybe there's a path forward to give a variable a sort of "de-hint" in that it can be anything BUT this type
I think this is called a negation type, and it acts like a logical NOT operator. I'd like it too, and I hear that it works well with union types (logical OR) and intersection types (logical AND) for specifying types precisely in a readable way.
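Python's type system has no negation operator today, so the closest approximation is a runtime guard (a sketch; the function name is made up):

```python
from typing import Any

def first_item(xs: Any) -> Any:
    # Approximate "any int-indexable type EXCEPT str" at runtime,
    # since the hint system cannot express the negation statically.
    if isinstance(xs, str):
        raise TypeError("str is not accepted here")
    return xs[0]

first_item([10, 20])   # fine: list
first_item((10, 20))   # fine: tuple
```

A hypothetical negation type would move that check from runtime into the annotation itself.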
The way I understand parent is that such a type would be too broad.
The bigger problem is that the type system expressed through hints in Python is not the type system Python is actually using. It's not even an approximation. You can express in the hint type system things that are nonsense in Python and write Python that is nonsense in the type system implied by hints.
The type system introduced through typing package and the hints is a tribute to the stupid fashion. But, also, there is no syntax and no formal definitions to describe Python's actual type system. Nor do I think it's a very good system, not to the point that it would be useful to formalize and study.
In Russian, there's an expression "like a saddle on a cow"; I'm not sure what the equivalent in English would be. It describes a situation where someone is desperately trying to add a desirable feature to an existing product that is ultimately not compatible with that feature. This, in my mind, is the best description of the relationship between Python's actual type system and the one from the typing package.
Close but not the same. In Russian, the expression implies an "upgrade", a failed attempt at improving something that either doesn't require improvement or cannot be improved in this particular way. This would be a typical example of how it's used: "I'm going to be a welder, I need this bachelor's degree like a saddle on a cow!".
I like your point! I think the advantage in its light is this: People often use Python because it's convention in the domain, the project already uses it, or it's the language the rest of the team uses. So, you are perhaps violating the spirit, but that's OK. You are making the most of tools available. It's not the Platonic (Pythonic??) ideal, but good enough.
I mean, you can just... not annotate something if creating the relevant type is a pain. Static analysis = type hints, and even then...
Besides, there must be some behavior you expect from this object. You could make a type that reflects this: IntIndexable or something, with an int index method and whatever else you need.
This feels like an extremely weak argument. Just think of it as self-enforcing documentation that also benefits auto-complete; what's not to love? Having an IntIndexable type seems like a great idea in your use case.
> The reason is that they are not really a part of the language, they violate the spirit of the language
This is a good way of expressing my own frustration with bolting strong typing on languages that were never designed to have it. I hate that TypeScript has won out over JavaScript because of this - it’s ugly, clumsy, and boilerplatey - and I’d be even more disappointed to see the same thing happen to the likes of Python and Ruby.
My background is in strongly typed languages - first C++, then Java, and C# - so I don’t hate them or anything, but nowadays I’ve come to prefer languages that are more sparing and expressive with their syntax.
> something that is indexable by ints.
> ...Dict[int, Any]...
If that is exactly what you want, then define a Protocol:
from __future__ import annotations
from typing import Protocol, TypeVar

T = TypeVar("T")
K = TypeVar("K")

class GetItem(Protocol[K, T]):
    def __getitem__(self, key: K, /) -> T: ...

def first(xs: GetItem[int, T]) -> T:
    return xs[0]
Then you can call "first" with a list or a tuple or a numpy array, but a checker will flag a dict keyed by strings, since its __getitem__ takes str rather than int. There is also collections.abc.Sequence, which is a type that has .__getitem__(int), .__getitem__(slice), .__len__ and is iterable. There are a couple of other useful ones in collections.abc as well, including Mapping (which you can use as Mapping[int, T], which may be of interest to you), Reversible, Callable, Sized, and Iterable.
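To illustrate the collections.abc route, a minimal sketch:

```python
from collections.abc import Sequence

def head(xs: Sequence[int]) -> int:
    # Sequence covers list, tuple, range, and any user type that
    # implements the sequence interface.
    return xs[0]

head([1, 2, 3])      # list: OK
head((1, 2, 3))      # tuple: OK
head(range(5))       # range: OK
# head({0: "a"})     # dict is a Mapping, not a Sequence; a checker flags it
```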
This is like saying you don’t like nails because you don’t understand how to use a hammer though. Developers are not understanding how to use the hints properly which is causing you a personal headache. The hints aren’t bad, the programmers are untrained - the acknowledgement of this is the first step into a saner world.
As a static typing advocate I do find it funny how all the popular dynamic languages have slowly become statically typed. After decades of people saying it's not at all necessary and being so critical of statically typed languages.
When I was working on a fairly large TypeScript project it became the norm for dependencies to have type definitions in a relatively short space of time.
People adapt to the circumstances. A lot of Python uses are no longer about fast iteration on the REPL. Instead of that we are shipping Python to execute in clusters on very long running jobs or inside servers. It's not only about having to start all over after hours, it's simply that concurrent and distributed execution environments are hostile to interactive programming. Now you can't afford to wait for an exception and launch the debugger in postmortem. Or even if you do it's not very useful.
And now my personal opinion: if we are going the static typing way, I would prefer simply to use Scala or similar instead of Python with types. Unfortunately, in the same way that high-performance languages like C attract premature optimizers, static types attract premature "abstracters" (C++ attracts both). I also think that dynamic languages have the largest libraries for technical-merit reasons: being more "fluid" makes them easier to mix. In the long term the ecosystem converges organically on certain interfaces between libraries.
And so here we are with the half baked approach of gradual typing and #type: ignore everywhere.
* Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.
* Types are incredibly valuable on hardened production code.
* Most good production code started out spikey, experimental, or as an MVP and transitioned.
And so here we are with gradual typing, because "throwing away all the code and rewriting it to be 'perfect' in another language" has been known for years to be a shitty way to build products.
I'm mystified that more people here don't see that the value and cost of types is NOT binary ("they're good! they're bad!") but exists on a continuum that is contingent on the status of the app and sometimes even the individual feature.
> Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.
I find I’ve spent so much time writing with typed code that I now find it harder to write POC code in dynamic languages because I use types to help reason about how I want to architect something.
Eg “this function should calculate x and return”, well if you already know what you want the function to do then you know what types you want. And if you don’t know what types you want then you haven’t actually decided what that function should do ahead of building it.
Now you might say "the point of experimental code is to figure out what you want functions to do". But even if you're writing an MVP, you should know what each function should do by the time you've finished writing it. Because if you don't know how to build a function, then how do you even know that the runtime will execute it correctly?
Python doesn’t have “no types,” in fact it is strict about types. You just don’t have to waste time reading and writing them early on.
While a boon during prototyping, a project may need more structural support as the design solidifies, it grows, or a varied, growing team takes responsibility.
At some point those factors dominate, to the extent “may need” support approaches “must have.”
My point is if you don’t know what types you need, then you can’t be trusted to write the function to begin with. So you don’t actually save that much time in the end. typing out type names simply isn’t the time consuming part of prototyping.
But when it comes to refactoring, having type safety makes it very easy to use static analysis (typically the compiler) check for type-related bugs during that refactor.
I’ve spent a fair amount of years in a great many different PL paradigms and I’ve honestly never found loosely typed languages any fast for prototyping.
That all said, I will say that a lot of this also comes down to what you’re used to. If you’re used to thinking about data structures then your mind will go straight there when prototyping. If you’re not used to strictly typed languages, then you’ll find it a distraction.
Right after hello world you need a list of arguments or a dictionary of numbers to names. Types.
Writing map = {} is a few times faster than map: dict[int, str] = {}. Now multiply by ten instances. Oh wait, I'm going to change that to a tuple of pairs instead.
It takes me about three times longer to write equivalent Rust than Python, and sometimes it’s worth it.
Rust is slower to prototype than Python because Rust is a low level language. Not because it’s strictly typed. So that’s not really a fair comparison. For example, assembly doesn’t have any types at all and yet is slower to prototype than Rust.
Let’s take Visual Basic 6, for example. That was very quick to prototype in even with “option explicit” (basically forcing type declarations) defined. Quicker, even, than Python.
Typescript isn’t any slower to prototype in than vanilla JavaScript (bar setting up the build pipeline — man does JavaScript ecosystem really suck at DevEx!).
Writing map = {} only saves you a few keystrokes. And unless you're typing really slowly with one finger, like an 80-year-old using a keyboard for the first time, you'll find the real input bottleneck isn't how quickly you can type your data structures into code, but how quickly your brain can turn a product spec / Jira ticket into a mental abstraction.
> Oh wait, I’m going to change that to a tuple of pairs instead
And that’s exactly when you want the static analysis of a strict type system to jump in and say “hang on mate, you’ve forgotten to change these references too” ;)
Having worked on various code bases across a variety of different languages, the refactors that always scare me the most isn’t the large code bases, it’s the ones in Python or JavaScript because I don’t have a robust type system providing me with compile-time safety.
There’s an old adage that goes something like this: “don’t put off to runtime what can be done in compile time.”
As computers have gotten exponentially faster, we seem to have forgotten this rule. And to our own detriment.
I've found the transition point where types become useful can start within even a few hundred lines of code, and I've found types are not that restrictive, if at all, especially if the language started out typed. An escape hatch to discard types is usually available for the rare case I need it, and reaching for it is a code smell that you're doing something wrong.
Even on a recent 1-hour toy Python interview question, having types would've saved me some issues and caught an error that wasn't obvious. It probably would've saved 10 minutes in the interview.
For me I often don't feel any pain-points when working before about 1kloc (when doing JS), however if a project is above 500loc it's often a tad painful to resume it months later when I've started to forget why I used certain data-structures that aren't directly visible (adding types at that point is usually the best choice since it gives a refresher of the code at the same time as doing a soundness check).
The transition point where type hints become valuable or even necessary isn't about how many lines of code you have; it's about how much you rely upon their correctness.
Type strictness also isn't binary. A program with lots of dicts that should be classes doesn't get much safer just because you wrote : dict[str, dict] everywhere.
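For instance (a sketch with made-up names), the dict annotation checks almost nothing inside the structure, while a dataclass gives the checker real shape to verify:

```python
from dataclasses import dataclass

# The loose version: the checker can't see inside the inner dict,
# so a misspelled key goes unnoticed until runtime.
user_loose: dict[str, dict] = {"profile": {"nme": "Ada"}}

@dataclass
class Profile:
    name: str

@dataclass
class User:
    profile: Profile

# The strict version: misspelling `name` is now an error the checker
# (and the dataclass constructor) will catch.
user_strict = User(profile=Profile(name="Ada"))
```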
In some fields throwing away and rewriting is the standard, and it works, more or less. I'm thinking about scientific/engineering software: prototype in Python or Matlab and convert to C or C++ for performance/deployment constraints.
It happens frequently with compilers too.
I think migrating languages is actually more successful than writing second versions.
> * Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.
This is what people say, but I don't think it's correct. What is correct is that say, ten to twenty years ago, all the statically typed languages had other unacceptable drawbacks and "types bad" became a shorthand for these issues.
I'm talking about C (a nonstarter for obvious reasons), C++ (a huge mess, footguns, very difficult, presumably requires a cmake guy), and Java (very restrictive, slow iteration and startup times, etc.). Compared to those, just using Python sounds decent.
Nowadays we have Go and Rust, both of which are pretty easy to iterate in (for different reasons).
I think Java was the main one. C/C++ are (relatively) close to the metal, system-level languages with explicit memory management - and were tacitly accepted to be the "complicated" ones, with dynamic typing not really applicable at that level.
But Java was the high-level, GCed, application development language - and more importantly, it was the one dominating many university CS studies as an education language before python took that role.
(Yeah, I'm grossly oversimplifying - sincere apologies to the functional crowd! :) )
The height of the "static typing sucks!" craze was more like a "The Java type system sucks!" craze...
I don't think types are expensive for MVP code unless they're highly complicated (but why would you do that?) Primitives and interfaces are super easy to type and worth the extra couple seconds.
PHP is a great example of the convergence of interfaces. Now they have different “PSR” standards for all sorts of things. There is one for HTTP clients, formatting, cache interfaces, etc. As long as your library implements the spec, it will work with everything else and then library authors are free to experiment on the implementation and contribute huge changes to the entire ecosystem when they find a performance breakthrough.
Types seem like a “feature” of mature software. You don’t need to use them all the time, but for the people stuck on legacy systems, having the type system as a tool in their belt can help to reduce business complexity and risk as the platform continues to age because tooling can be built to assert and test code with fewer external dependencies.
They don't. They become gradually typed, which is a thing of its own.
You can keep the advantages of dynamic languages, the ease of prototyping but also lock down stuff when you need to.
It is not a perfect union. Generally the trade-off is that you either can't achieve the same safety level as in a purely statically typed language, because you need to provide some escape hatches, or you need an extremely complex type system to capture the expressiveness of the dynamic side. Most of the time it is a mixture of both.
Still, I think this is the way to go. Not "dynamic typing won" or "static typing won", but both are useful, and having a language support both is a huge productivity boost.
> how all the popular dynamic languages have slowly become statically typed
Count the amount of `Any` / `unknown` / `cast` / `var::type` in those codebases, and you'll notice that they aren't particularly statically typed.
The types in dynamic languages are useful for checking validity in majority of the cases, but can easily be circumvented when the types become too complicated.
It is somewhat surprising that dynamic languages didn't go the pylint way, i.e. checking the codebase by auto-determined types (determined based on actual usage).
Julia (by default) does the latter, and it's terrible. It makes it a) slow, because you have to do nonlocal inference through entire programs, b) impossible to type check generic library code where you have no actual usage, c) very hard to test that some code works generically, as opposed to just with these concrete types, and finally d) it breaks whenever you have an Any anywhere in the code, so the chain of type information is broken.
In the discussion of static vs dynamic typing, solutions like TypeScript or annotated Python were not really considered.
IMHO the idea of a complex, inference-heavy type system that is mostly useless at runtime and compilation, but focused on essentially interactive linting, is relatively recent, and its popularity is due to TypeScript's success.
I think that static typing proponents were thinking of something more along the lines of Haskell/OCaml/Java, rather than a type-erased system bolted onto a language where [1] > 0 is true because the array is coerced to the string "1" and then to the number 1.
OTOH I only came to realize that I actually like duck typing in some situations when I tried to add type hints to one of my Python projects (and then removed them again, because the actually important types consisted almost entirely of sum types, and what's the point of static typing if everything is a variant anyway?).
E.g. when Python is used as a 'scripting language' instead of a 'programming language' (like for writing small command line tools that mainly process text), static typing often just gets in the way. For bigger projects where static typing makes sense I would pick a different language. Because tbh, even with type hints Python is a lousy programming language (but a fine scripting language).
> Because tbh, even with type hints Python is a lousy programming language (but a fine scripting language).
I'd be interested in seeing you expand on this, explaining the ways you feel Python doesn't make the cut for programming language while doing so for scripting.
The reason I say this is because, intuitively, I've felt this way for quite some time but I am unable to properly articulate why, other than "I don't want all my type errors to show up at runtime only!"
Note 1:
Type hints are hints for the reader.
If you cleverly discovered that your function is handling any type of data, hint that!
Note 2: From my experience in Java, I have NEVER seen a function that explicitly consumes an Object. In Java, you always name things. Maybe with parametric polymorphism, to capture complex typing patterns.
Note 3: unfortunately, you cannot subclass String to capture the semantics of its content.
> Java, i have NEVER seen a function that consumes explicitely an Object
So you did not see any Java code from before version 5 (in 2004) then, because the language did not have generics for the first several years it was popular. And of course many were stuck working with older versions of the language (or variants like mobile Java) without generics for many years after that.
Probably because the adoption of generics has been absolutely massive in the last 20 years.
And I expect the same thing to eventually happen with Typescript and [typed] Python.
[*]: nor have I seen EJB1 or even EJB2. Spring just stormed them, in the last 20 years.
Sounds to be more of a symptom of the types of programs and functions you have written, rather than something inherent about types or Python. I've never encountered the type of gerry-mangled scenario you have described no matter how throwaway the code is.
AI tab-complete & fast LSP implementations made typing easy. The tools changed, and people changed their minds.
JSON's interaction with types is still annoying. A deserialized JSON could be any type. I wish there was a standard python library that deserialized all JSON into dicts, with opinionated coercing of the other types. Yes, a custom normalizer is 10 lines of code. But, custom implementations run into the '15 competing standards' problem.
Actually, there should be a popular type-coercion library that deals with a bunch of these annoying scenarios. I'd evangelize it.
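A sketch of the ~10-line normalizer described above (the coercion policy here, forcing str keys and plain containers, is just one opinionated choice):

```python
import json
from typing import Any

def normalize(value: Any) -> Any:
    # Recursively force every mapping to a plain dict with str keys
    # and every array to a plain list, leaving scalars untouched.
    if isinstance(value, dict):
        return {str(k): normalize(v) for k, v in value.items()}
    if isinstance(value, list):
        return [normalize(v) for v in value]
    return value

data = normalize(json.loads('{"a": [1, {"b": 2}]}'))
```

After this pass, downstream code can at least assume dict-with-str-keys everywhere.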
> all the popular dynamic languages have slowly become statically typed
I’ve heard this before, but it’s not really true. Yes, maybe the majority of JavaScript code is now statically-typed, via Typescript. Some percentage of Python code is (I don’t know the numbers). But that’s about it.
Very few people are using static typing in Ruby, Lua, Clojure, Julia, etc.
Types become very useful when the code base reaches a certain level of sophistication and complexity. It makes sense that for a little script they provide little benefit but once you are working on a code base with 5+ engineers and no longer understand every part of it having some more strict guarantees and interfaces defined is very very helpful. Both for communicating to other devs as well as to simply eradicate a good chunk of possible errors that happen when interfaces are not clear.
Fair enough, apart from Ruby they’re all pretty niche.
OTOH I’m not arguing that most code should be dynamically-typed. Far from it. But I do think dynamic typing has its place and shouldn’t be rejected entirely.
Also, I would have preferred it if Python had concentrated on being the best language in that space, rather than trying to become a jack-of-all-trades.
I disagree for Julia, but that probably depends on the definition of static typing.
For the average Julia package, I would guess that most types are statically known at compile time, because dynamic dispatch is detrimental to performance. I consider that to be the definition of static typing.
That said, Julia functions seldomly use concrete types and are generic by default. So the function signatures often look similar to untyped Python, but in my opinion this is something entirely different.
At least in Ruby there are major code bases using Stripe's Sorbet and the official RBS standard for type hints.
Notably it's big code bases with large numbers of developers, fitting the trend most people in this discussion point to.
My last job was working at a company that is notorious for Ruby and even though I was mostly distant from it, there seemed to be a big appetite for Sorbet there.
The big difference between static typing in Python and Ruby is that Guido et al have embraced type hints, whereas Matz considers them to be (the Ruby equivalent of) “unpythonic”. Most of each language’s community follows their (ex-)BDFL’s lead.
I think you're ignoring how for some of us, gradual typing, is a far better experience than languages with static types.
For example, what I like about PHPStan (tacked-on static analysis through comments) is that it offers so much flexibility when defining type constraints. You can even specify the literal values a function accepts besides the base type, and subtype nested array structures (basically support for comfortably typing out the nested structure of a JSON document the moment you decode it).
Not ignoring, I just didn't write an essay. In all that time working with TypeScript there was very little that I found to be gradually typed, it was either nothing or everything, hence my original comment. Sure some things might throw in a bunch of any/unknown types but those were very much the rarity and often some libraries were using incredibly complicated type definitions to make them as tight as possible.
Worked with python, typescript and now php, seems that phpstan allows this gradual typing, while typescript kinda forces you to start with strict in serious projects.
Type hints and/or stronger typing in other languages are not good substitutes for testing. I sometimes worry that teams with strong preferences for strong typing have a false sense of security.
Writing and maintaining tests that just do type checking is madness.
Dynamic typing also gives tooling such as LSPs and linters a hard time figuring out completions/references lookup etc. Can't imagine how people work on moderate to big projects without type hints.
Static typing used to be too rigid and annoying to the point of being counterproductive. After decades of improvement of parsers and IDEs they finally became usable for rapid development.
Everything goes in cycles. It has happened before and it will happen again. The software industry is incredibly bad at remembering lessons once learned.
That's because many do small things that don't really need it; sure, there are some people doing larger stuff who are happy to be the sole maintainer of a codebase, or to replace the language's types with unit-test type checks.
And I think they can be correct in rejecting it: banging out a small useful project (preferably below 1000 loc) flows much faster if you just build code doing things rather than start annotating (which can quickly become a mind-sinkhole of naming decisions that interrupts a building flow).
However, even less complex 500 loc+ programs without typing can become a pita to read after the fact and approaching 1kloc it can become a major headache to pick up again.
Basically, can't beat speed of going nude, but size+complexity is always an exponential factor in how hard continuing and/or resuming a project is.
Thing is, famous dynamic languages of the past, Common Lisp, BASIC, Clipper, FoxPro, all got type hints for a reason, then came a new generation of scripting languages made application languages, and everyone had to relearn why the fence was in the middle of the field.
I think both found middle ground. In Java you don’t need to define the type of variables within the method. In Python people have learned types in method arguments is a good thing.
You have to admit that the size and complexity of the software we write has increased dramatically over the last few "decades". Looking back at MVC "web applications" I created in the early 2000s, and comparing them to the giant workflows we deal with today... it's not hard to imagine how dynamic typing was/is OK to get started, but when things exceed one's "context", type hints help.
I like static types but advocating for enforcing them in any situation is different. Adding them when you need (Python currently) seems a better strategy than forcing you to set them always (Typescript is in between as many times it can determine them).
Many years ago I felt Java typing could be overkill (some types could have been deduced from context and they were too long to write) so probably more an issue about the maturity of the tooling than anything else.
What I would need is a statically typed language that has first class primitives for working with untyped data ergonomically.
I do want to be able to write a dynamically typed function or subsystem during the development phase, and "harden" it with types once I'm sure I got the structure down.
But the dynamic system should fit well into the language, and I should be able to easily and safely deal with untyped values and convert them to typed ones.
Yes, the sad part is that some people experienced early TypeScript, which for some reason had the idea of forcing "class" constructs into a language where most people weren't using or needing them (and still aren't).
Sometime around TypeScript 2.9 it finally started adding constructs that made gradual typing of real-world JS code sane, but by then there was a stubborn perception of it being bad/bloated/Java-ish, etc., despite it maturing into something fairly great.
The need for typing changed, when the way the language is used changed.
When JavaScript programs were a few hundred lines to add interactivity to some website, type annotations were pretty useless.
Now the typical JavaScript project is far larger and far more complex.
The same goes for python.
dynamically-typed languages were typically created for scripting tasks - but ended up going viral (in part due to d-typing), the community stretched the language to its limits and pushed it into markets it wasn't designed/thought for (embedded python, server-side js, distributed teams, dev outsourcing etc).
personally i like the dev-sidecar approach to typing that Python and JS (via TS) have taken to mitigate the issue.
I think that the practically available type checkers evolved to a point where many of the common idioms can be expressed with little effort.
If one thinks back to some of the early statically typed languages, you'd have a huge rift: You either have this entirely weird world of Caml and Haskell (which can express most of what python type hints have, and could since many years), and something like C, in which types are merely some compiler hints tbh. Early Java may have been a slight improvement, but eh.
Now, especially with decent union types, you can express a lot of idioms of dynamic code easily. So it's a fairly painless way to get type completion in an editor, so one does that.
There's no dropping of type requirements in Java, `var` only saves typing.
When you use `var`, everything is as statically typed as before, you just don't need to spell out the type when the compiler can infer it. So you can't (for example) say `var x = null` because `null` doesn't provide enough type information for the compiler to infer what's the type of `x`.
var does absolutely nothing to make Java a less strictly typed language. There is absolutely no dropping of the requirement that each variable has a type which is known at compile time.
Automatic type inference and dynamic typing are totally different things.
I have not written a line of Java in at least a decade, but does Java not have any 'true' dynamic typing like C# does? Truth be told, the 'dynamic' keyword in C# should only be used in the most niche of circumstances. Typically, only practitioners of Dark Magic use the dynamic type. For the untrained, it often leads one down the path of hatred, guilt, and shame. For example:
dynamic x = "Forces of Darkness, grant me power";
Console.WriteLine(x.Length); // Dark forces flow through the CLR
x = 5;
Console.WriteLine(x.Length); // Runtime error: CLR consumed by darkness.
C# also has the statically typed 'object' type which all types inherit from, but that is not technically a true instance of dynamic typing.
Same nonsense repeated over and over again... There aren't dynamic languages. It's not a thing. The static types aren't what you think they are... You just don't know what you are saying and your conclusion is just a word salad.
What happened to Python is that it used to be a "cool" language, whose community liked to make fun of Java for their obsession with red-taping, which included the love for specifying unnecessary restrictions everywhere. Well, just like you'd expect from a poorly functioning government office.
But then everyone wanted to be cool, and Python was adopted by the programming analogue of the government bureaucrats: large corporations which treat programming as a bureaucratic mill. They don't want fun or creativity or one-off bespoke solutions. They want an industrial process that works on as large a scale as possible, to employ thousands of worst-quality programmers, but still reliably produce slop.
And incrementally, Python was made into Java. Because, really, Java is great for producing slop on an industrial scale. But the "cool" factor was important for attracting talent back when there was a shortage, so now you have a Python that was remade into Java. People who didn't enjoy Java left Python over a decade ago, so Python today has nothing in common with what it was when it was "cool". It's still a worse Java than Java, but people don't like to admit defeat, and... well, there's also the sunk cost fallacy: so much effort was already spent on making Python into Java that it seems like a good idea to waste even more effort trying to make it a better Java.
Yeah, this is the lens through which I view it. It's a sort of colonization that happens, when corporations realize a language is fit for plunder. They start funding it, then they want their people on the standards boards, then suddenly the direction of the language is matched very nicely to their product roadmap. Meanwhile, all the people who used to make the language what it was are bought or pushed out, and the community becomes something else entirely.
I love typing in Python. I learnt programming with C++ and OOP. It was freeing when I took up Python not to care about types, but I have come to enjoy types as I've gotten older.
But, boy, have we gone overboard with this now. Modern libraries seem to be creating types for the sake of it. I am drowning in nested types that never seem to bottom out in native types. The worst part is that the libraries' code examples don't even show them.
Like copy paste an OpenAI example and see if LSP is happy for example. Now I have gotten in this situation where I am mentally avoiding type errors of some libraries and edging into wishing Pydantic et al never happened.
My love for python was critically hurt when I learned about typing.TYPE_CHECKING.
For those unaware, due to the dynamic nature of Python, you declare a variable type like this
foo: Type
This might look like Typescript, but it isn't because "Type" is actually an object. In python classes and functions are first-class objects that you can pass around and assign to variables.
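A quick illustration of that first-class behavior (a tiny example of my own, not from any library):

```python
# A class is an ordinary runtime value: it can be bound to a new
# name, passed to functions, and inspected like any other object.
MyNumber = int            # assign the class itself to a variable

def describe(t: type) -> str:
    # receives the class object and reads its metadata at runtime
    return t.__name__

print(describe(MyNumber))  # prints "int"
```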
The obvious problem with this is that the object you use as a type must, in "normal Python", be available in the scope of that line, which means you can't do this:
def foo() -> Bar:
    return Bar()

class Bar:
    pass
Because "Bar" is defined AFTER foo() it isn't in the scope when foo() is declared. To get around this you use this weird string-like syntax:
def foo() -> "Bar":
    return Bar()
This already looks ugly enough that it should make Pythonistas ask "Python... what are you doing?" but it gets worse.
If you have a cyclic reference between two files, something that works out of the box in statically typed languages like Java, and that works in Python when you aren't using type hints because every object is the same "type" until it quacks like a duck, that isn't going to work if you try to use type hints in python because you're going to end up with a cyclic import. More specifically, you don't need cyclic imports in Python normally because you don't need the types, but you HAVE to import the types to add type hints, which introduces cyclic imports JUST to add type hints. To get around this, the solution is to use this monstrosity:
if typing.TYPE_CHECKING:
    import Foo from foo
And that's code that only "runs" when the static type check is statically checking the types.
Nobody wants Python 4, but this was such an incredibly convoluted way to add this feature, especially when you consider that it means every module now "over-imports" just to add type hints that it previously didn't have to.
Every time I see it, it makes me think that if type checks are so important, maybe we shouldn't be programming in Python to begin with.
There's actually another issue with ForwardRefs. They don't work in the REPL. So this will work when run as a module:
def foo() -> "Bar":
    return Bar()
But will throw an error if copy pasted into a REPL.
However, all of these issues should be fixed in 3.14 with PEP649 and PEP749:
> At compile time, if the definition of an object includes annotations, the Python compiler will write the expressions computing the annotations into its own function. When run, the function will return the annotations dict. The Python compiler then stores a reference to this function in __annotate__ on the object.
> This mechanism delays the evaluation of annotations expressions until the annotations are examined, which solves many circular reference problems.
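Even before 3.14, a deferred flavor of this is visible with `typing.get_type_hints`, which resolves stringified forward references only when asked (a minimal sketch, run at module level):

```python
import typing

def foo() -> "Bar":   # forward reference, stored as a plain string
    return Bar()

class Bar:
    pass

# By the time we ask for the hints, Bar exists, so the string resolves:
hints = typing.get_type_hints(foo)
print(hints["return"] is Bar)  # prints True
```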
Please ignore my first assertion that the behavior between REPL and module is different.
This would have been the case if the semantics of the original PEP649 spec had been implemented. But instead, PEP749 ensures that it is not [0]. My bad.
> that isn't going to work if you try to use type hints in python because you're going to end up with a cyclic import. More specifically, you don't need cyclic imports in Python normally because you don't need the types, but you HAVE to import the types to add type hints, which introduces cyclic imports JUST to add type hints.
Yes, `typing.TYPE_CHECKING` is there so that you can conditionally avoid imports that are only needed for type annotations. And yes, importing modules can have side effects and performance implications. And yes, I agree it's ugly as sin.
But Python does in fact allow for cyclic imports — as long as you're importing the modules themselves, rather than importing names `from` those modules. (By the way, the syntax is the other way around: `from ... import ...`.)
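For completeness, the guard with the correct syntax looks like this (using a stdlib class purely for illustration; in real code the guarded import is what would otherwise create the cycle):

```python
import typing

if typing.TYPE_CHECKING:
    # Only evaluated by static type checkers (mypy, pyright, ...);
    # at runtime TYPE_CHECKING is False, so no import happens and
    # no cycle can occur.
    from collections import OrderedDict

def first_key(d: "OrderedDict") -> object:
    # The annotation is a string, so OrderedDict need not exist at runtime.
    return next(iter(d))

print(first_key({"a": 1, "b": 2}))  # prints "a"
```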
If the type is a class with methods, then this method doesn't work, though adding intermediate interface classes (possibly with Generic types) might help in most cases. Python's static type system isn't quite at the same level as F#'s.
> Well, these complaints are unfounded.
"You're holding it wrong." I've also coded quite a bit of OCaml and it had the same limitation (which is where F# picked it up in the first place), and while the issue can be worked around, it still seemed to creep up at times. Rust, also with some virtual OCaml ancestry, went completely the opposite way.
My view is that while in principle it's a nice property that you can read and understand a piece of code by starting from the top and going to the bottom (and a REPL is going to do exactly that), in practice it's not the ultimate nice property to uphold.
I ran into some code recently where this pattern caused me so much headache - class Foo has an attribute which is an instance of class Bar, and class Bar has a "parent" attribute pointing back at the Foo instance it belongs to:
class Foo:
    def __init__(self, bar):
        self.bar = bar

class Bar:
    def __init__(self, foo):
        self.foo = foo
Obviously both called into each other to do $THINGS... Pure madness.
So my suggestion: Try not to have interdependent classes :D
Well, at times having a parent pointer is rather useful! E.g. a callback registration will be able to unregister itself from everywhere where it has been registered to, upon request. (One would want to use weak references in this case.)
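The weak-reference version of a parent pointer might look like this (a hypothetical `Node` class, just to sketch the pattern):

```python
import weakref

class Node:
    def __init__(self, parent=None):
        # Hold the parent weakly so the child doesn't keep it alive,
        # avoiding a reference cycle between parent and child.
        self._parent = weakref.ref(parent) if parent is not None else None

    @property
    def parent(self):
        # Dereference the weakref; returns None once the parent is gone.
        ref = self._parent
        return ref() if ref is not None else None

root = Node()
child = Node(parent=root)
print(child.parent is root)  # prints True
```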
> If you have a cyclic reference between two files,
Don't have cyclic references between two files.
It makes testing very difficult, because in order to test something in one file, you need to import the other one, even though it has nothing to do with the test.
It makes the code more difficult to read, because you're importing these two files in places where you only need one of them, and it's not immediately clear why you're importing the second one. And it's not very satisfying to learn that you're importing the second one not because you "need" it but because the circular import forces you to do so.
Every single time you have cyclic references, what you really have are two pieces of code that rely on a third piece of code, so take that third piece, separate it out, and have the first two pieces of code depend on the third piece.
Now things can be tested, imports can be made sanely, and life is much better.
Using the typical "Rust-killer" example: if you have a linked list where the List in list.py returns a Node type and Node in node.py takes a List in its constructor, you already have a cyclic reference.
On the other hand, I tend to take it as a hint that I should look at my module structure, and see if I can avoid the cyclic import (even if before adding type hints there was no error, there still already was a "semantic dependency"...)
You're missing the benefit of this. It's actually a feature.
With python, because types are part of python itself, they can thus be programmable. You can create a function that takes in a typehint and returns a new typehint. This is legal python. For example below I create a function that dynamically returns a type that restricts a Dictionary to have a specific key and value.
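A sketch of what such a function can look like, using the functional form of `typing.TypedDict` (the name `make_typed_dict` echoes a later comment in this thread; treat the whole thing as illustrative):

```python
from typing import TypedDict

def make_typed_dict(key: str, value_type: type) -> type:
    # Types are ordinary objects, so a function can build and return
    # a brand-new type: here, a TypedDict that requires exactly one
    # key with the given value type.
    return TypedDict("DynamicDict", {key: value_type})

PointX = make_typed_dict("x", int)
print(PointX.__annotations__)  # prints {'x': <class 'int'>}
```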
With this power, in theory you can create programs where types essentially "prove" your program correct, eliminating unit tests. Languages like Idris specialize in this. But it's not just rare/specialized languages that do this. TypeScript, believe it or not, has programmable types that are so powerful that writing functions that return types like the one above is actually very commonplace. I was a bit late to the game with TypeScript, but I was shocked to see that it was taking cutting-edge stuff from the typing world and making it popular among users.
In practice, using types to prove programs valid in place of testing is a bit too tedious compared with tests, so people don't go overboard with it. It is a much safer route than testing, but much harder. Additionally, as of now, whether type-level functions can be enforced and executed in Python depends on how powerful the typechecker is. It's certainly possible; it's just that nobody has done it yet.
I'd go further than this, actually. Python is potentially a more powerfully typed language than TS. In TS, types are basically another language tacked onto JavaScript. The two are entirely separate languages, and the type-level language is quite limited.
The thing with Python is that the types and the language ARE the SAME thing. They live in the same universe. You complained about this, but there's a lot of power in that, because types basically become Turing complete and you can create a type that does anything, including proving your whole program correct.
Like I said that power depends on the typechecker. Someone needs to create a typechecker that can recognize type level functions and so far it hasn't happened yet. But if you want to play with a language that does this, I believe that language is Idris.
And, as you heavily imply in your post, type checkers won't be able to cope with it, eliminating one of the main benefits of type hints. Neither will IDEs / language servers, eliminating the other main benefit.
The syntax is a monstrosity. You can also extract a proven OCaml program from Coq and Coq has a beautiful syntax.
If you insist on the same language for specifying types, some Lisp variants do that with a much nicer syntax.
Python people have been indoctrinated since ctypes that a monstrous type syntax is normal, and they reject anything else. In fact, Python type hints are basically stuck at the ctypes level, syntax-wise.
I don't believe Typescript (nor Idris) type systems work like you describe, though? Types aren't programmable with code like that (in the same universe, as you say) and TS is structurally typed, with type erasure (ie types are not available at runtime).
I am not that deeply familiar with Python typings development but it sounds fundamentally different to the languages you compare to.
The powerful thing about these languages is that they can prove your program correct. With testing you can never verify your program to be correct.
Testing is a statistical sampling technique. To verify a program as correct via tests you have to test every possible input and output combination of your program, which is impractical. So instead people write tests for a subset of the possibilities which ONLY verifies the program as correct for that subset. Think about it. If you have a function:
def add(x: int, y: int) -> int
How would you verify this program is 100% correct? You'd have to test every possible combination of x, y and add(x, y). Instead you test 3 or 4 possibilities in your unit tests, and this helps with the overall safety of the program because of statistical sampling: if a small sample of the logic is correct, it says something about the entire population of the logic.
Types on the other hand prove your program correct.
def add(x: int, y: int) -> int:
    return x + y
If the above is type checked, your program is proven correct for ALL possible types. If those types are made more advanced via being programmable, then it becomes possible for type checking to prove your ENTIRE program correct.
Imagine:
def add<A: addable < 4, B: addable < 4>(x: A, y: B) -> A + B:
    return x + y
With a type checker that can analyze the above, you can create an add function that at most can take an int that is < 4 and return an int that is < 8, thereby verifying even more correctness of your addition function.
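Python can't check anything like that statically today, but the intent can at least be approximated with runtime assertions (a hypothetical `bounded_add`, just to make the idea concrete):

```python
def bounded_add(x: int, y: int) -> int:
    # Runtime stand-in for the dependent-type constraints above:
    # each operand must be < 4, so the sum is guaranteed < 8.
    assert x < 4 and y < 4, "operands must be < 4"
    result = x + y
    assert result < 8    # follows from the preconditions
    return result

print(bounded_add(3, 3))  # prints 6
```

The difference, of course, is that these checks fire at runtime on specific inputs, whereas a dependent type checker would establish them for all inputs before the program ever runs.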
Python, on the other hand, doesn't really have type checking; it has type hints. Those type hints can be defined in the same language space as Python, so a type checker must read Python to a limited extent in order to get the types. Python, at the same time, can also read those same types. It's just that Python doesn't do any type checking with the types, while the type checker doesn't do anything with the Python code other than typecheck it.
Right now though, for most typecheckers, if you create a function in python that returns a typehint, the typechecker is not powerful enough to execute that function to find the final type. But this can certainly be done if there was a will because Idris has already done this.
Are there really productive projects which rely on types as a proofing system? I've always thought it added too much complexity to the code, but I'd love to see it working well somewhere. I love the idea of correctness by design.
Not to my knowledge; nothing uses a strict proofing system, because, like I said, it becomes hard to do. It could be useful for ultra-safe software, but for most cases the complexity isn't worth it.
But that doesn't mean it's not useful to have this capability as part of your typesystem. It just doesn't need to be fully utilized.
You don't need to program a type that proves everything correct. You can make sure aspects of the program are MORE correct than plain old types would allow. TypeScript is a language that does this, and it is very common to find types in TypeScript that are more "proofy" than regular types in other languages.
Typescript does this. Above there's a type that's only a couple of lines long that proves a string reversal function reverses a string. I think even going that deep is overkill but you can define things like Objects that must contain a key of a specific string where the value is either a string or a number. And then you can create a function that dynamically specifies the value of the key in TS.
I think TS is a good example of a language that practically uses proof-based types. The syntax is terrible enough that it prevents people from going overboard with it, and the result is the most practical application of proof-based typing that I've seen. What TypeScript tells us is that proof-based typing need only be sprinkled throughout your code; it shouldn't take it over.
That's horrible. Nobody needs imperative metaprogramming for type hints. In fact, it would be absolute insanity for a typechecker to check this because it would mean opening a file in VS code = executing arbitrary python code. What stops me from deleting $HOME inside make_typed_dict?
TypeScript solves this with its own syntax that never gets executed by an interpreter, because types are stripped when TS is compiled to JS.
>VS code = executing arbitrary python code. What stops me from deleting $HOME inside make_typed_dict?
Easy: make IO calls illegal in the type checker. The type checker of course needs to execute code in a sandbox. It won't be the full Python language. Idris ALREADY does this.
I want to say it (or something similar at least) was originally addressed by from __future__ import annotations back in Python 3.7/3.8 or thereabouts? I definitely remember having to use stringified types a while back but I haven't needed to for quite a while now.
It turns them into thunks (formerly strings) automatically, an important detail if you're inspecting annotations at run time because the performance hit of resolving the actual type can be significant.
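Under `from __future__ import annotations`, the stored annotation is literally the source text, resolved only if someone asks for it:

```python
from __future__ import annotations

def foo() -> Bar:   # no quotes needed: nothing is evaluated here
    return Bar()

class Bar:
    pass

# The raw annotation is kept as a plain string until explicitly resolved:
print(foo.__annotations__["return"])  # prints Bar (a string, not the class)
```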
> But, boy, have we gone overboard with this now. Modern libraries seem to be creating types for the sake of it. I am drowning in nested types that never seem to bottom out in native types.
Thought you were talking about TypeScript for a moment there.
I learned C++ before learning python as well and python felt like a breath of fresh air.
At first I thought it was because of the lack of types. But in actuality the lack of types was a detriment for Python. It was an illusion. The reason Python felt so much better was that it had clear error messages and a clear path to finding errors and bugs.
In C++, memory leaks and seg faults are always hidden from view, so even though C++ is statically typed, it's practically less safe than Python and much harder to debug.
The whole Python-and-Ruby explosion in popularity back in the day was a trick. It was an illusion. We didn't like them more because of the lack of typing. These languages were embraced because they weren't C or C++.
It took a decade for people to realize this, with type hints and TypeScript. This was a huge technical debate, and now all those people who were against types have been proven utterly wrong.
The best kind of documentation is the kind you can trust is accurate. Type defs wouldn't be close to as useful if you didn't really trust them. Similarly, doctests are some of the most useful documentation because you can be sure they are accurate.
The best docs are the ones you can trust are accurate. The second best docs are ones that you can programmatically validate. The worst docs are the ones that can’t be validated without lots of specialized effort.
I’d almost switch the order here! In a world with agentic coding agents that can constantly check for type errors from the language server powering the errors/warnings in your IDE, and reconcile them against prose in docstrings… types you can programmatically validate are incredibly valuable.
When I wrote that, I was thinking about typed, compiled languages' documentation generated by the compiler at build time. Assuming that version drift ("D'oh, I was reading the docs for v1.2.3 but running v4.5.6") is user error and not a docs-trustworthiness issue, that'd qualify.
But now that I'm coming back to it, I think that this might be a larger category than I first envisioned, including projects whose build/release processes very reliably include the generation+validation+publication of updated docs. That doesn't imply a specific language or release automation, just a strong track record of doc-accuracy linked to releases.
In other words, if a user can validate/regenerate the docs for a project, that gets it 9/10 points. The remaining point is the squishier "the first party docs are always available and well-validated for accuracy" stuff.
This resonates with me so much. I feel like half the comments in this thread are missing the value of typing, but maybe they've never had the misfortune of working with hundreds of other developers on a project with no defined contracts on aggregates / value objects outside of code comments and wishful thinking.
I've worked on large python codebases for large companies for the last ~6 years of my career; types have been the single biggest boon to developer productivity and error reduction on these codebases.
Just having to THINK about types eliminates so many opportunities for errors, and if your type is too complex to express it's _usually_ a code smell; most often these situations can be re-written in a more sane albeit slightly verbose fashion, rather than using the more "custom" typing features.
No one gets points for writing "magical" code in large organizations, and typing makes sure of this. There's absolutely nothing wrong with writing "boring" python.
Could we have accomplished this by simply having used a different language from the beginning? Absolutely, but oftentimes that's not an option for a company with a mature stack.
TL;DR -- Typing in Python is an exceptional tool for scaling your engineering organization on a code base.
The correct response to this is to figure out the use case for your function, e.g. adding two numbers. Set the input and output types to Decimal and call it a day.
Actually, it's not missing the point. Sometimes you really do want duck typing, in which case you allow Any. It's not all-or-nothing.
What the reddit post is demonstrating is that the Python type system is still too naive in many respects (and that there are implementation divergences in behavior). In other languages, this is a solved problem - and very ergonomic and safe.
As the top comment says, if you don't know or want to define the type just use Any. That's what it's there for.
That entire Reddit post is a clueless expert beginner rant about something they don't really understand, unfortunate that it's survived as long as it has or that anyone is taking it as any sort of authoritative argument just because it's long.
That's not the issue the reddit post is raising. The reddit post is pointing out that what a "type" is is not as simple as it looks. Particularly in a language like Python where user-defined types proliferate, and can add dunder methods that affect statements that involve built-in operations. "Just use Any" doesn't solve any of those problems.
> just use Any.
All the above said: not putting a type in at all is even easier than using Any, and is semantically equivalent.
The Reddit post falls under the case of "don't know" the type. If you want to allow users to pass in any objects, try to add and fail at runtime... that's exactly what Any is for.
But the entire post is built upon the premise that accepting all types is good API design. Which it isn't, at all.
> The Reddit post falls under the case of "don't know" the type.
No, it doesn't. The desired type is known; it's "Addable" (i.e., "doesn't throw an exception when the built-in add operator is used"). The problem is expressing that in Python's type notation in a way that catches all edge cases.
> If you want to allow users to pass in any objects, try to add and fail at runtime
Which is not what the post author wants to do. They want to find a way to use Python's type notation to catch those errors with the type checker, so they don't happen at runtime.
> the entire post is built upon the premise that accepting all types is good API design
It is based on no such thing. I don't know where you're getting that from.
> The desired type is known; it's "Addable" (i.e., "doesn't throw an exception when the built-in add operator is used").
The mistake both you and the reddit post's author make is treating the `+` operator the same as you would an interface method. Despite Python having __add__/__radd__ methods, this isn't true, nor is it true in many other programming languages. For example, Go doesn't have a way to express "can use the + operator" at all, and "can use comparison operators" is defined as an explicit union between built-in types.[0] In C# you could only do this as of .NET 7, which was released in Nov 2022[1] -- was the C# type system unusable for the 17 years prior, when it didn't support this scenario?
If this were any operation on `a` and `b` other than a built-in operator, such as `a.foo(b)`, it would be trivial to define a Protocol (which the author does in Step 4) and have everything work as expected. It's only because of a misunderstanding of basic Python that the author continues to struggle for another 1000 words before concluding that type checking is bad. It's an extremely cherry-picked and unrealistic scenario, either from someone who is clueless or from someone who knows what they're doing and is intentionally being malicious in order to engagement bait.[2]
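For reference, a rough sketch of the Protocol approach (my own reconstruction, covering only the homogeneous `T + T -> T` case; `__radd__` and mixed-type addition are exactly the edge cases the post trips over):

```python
from typing import Protocol, TypeVar

T = TypeVar("T")

class Addable(Protocol[T]):
    # Structural type: anything whose __add__ takes a T and returns a T.
    def __add__(self, other: T) -> T: ...

def slow_add(a: Addable[T], b: T) -> T:
    # Runtime behavior is unchanged; the annotation only tells the
    # static checker that `a + b` is legal here.
    return a + b

print(slow_add(1, 2))      # prints 3
print(slow_add("a", "b"))  # prints ab
```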
This isn't to say Python (or Go, or C#) has the best type system, and it certainly lacks compared to Rust which is a very valid complaint, but "I can't express 'type which supports the '+' operator'" is an insanely esoteric and unusual case, unsupported in many languages, that it's disingenuous to use it as an excuse for why people shouldn't bother with type hinting at all.
[2] actually reading through the reddit comments, the author specifically says they were engagement baiting so... I guess they had enough Python knowledge to trick people into thinking type hinting was bad, fair enough!
> treating the `+` operator the same as you would an interface method
In other words, you agree that the Python type hint system does not give you a good, built-in way to express the "Addable" type.
Which means you are contradicting your claims that the type the article wants to express is "unknown" and that the article is advocating using "Any" for this case. The type is not unknown--it's exactly what I said: "doesn't throw an exception when using the + operator". That type is just not expressible in Python's type hint system in the way that would be needed. And "Any" doesn't address this problem, because the article is not saying that every pair of objects should be addable.
> "I can't express 'type which supports the '+' operator'" is an insanely esoteric and unusual case
I don't see why. Addition is a very commonly used operation, and being able to have a type system that can express "this function takes two arguments that can be added using the addition operator" seems like something any type system that delivers the goods it claims to deliver ought to have.
> unsupported in many languages
Yes, which means many languages have type systems that claim to deliver things they can't actually deliver. They can mostly deliver them, but "mostly" isn't what advocates of using type systems in all programs claim. So I think the article is making a useful point about the limitations of type systems.
> it's disingenuous to use it as an excuse for why people shouldn't bother with type hinting at all.
The article never says that either. You are attacking straw men.
> I don't see why. Addition is a very commonly used operation, and being able to have a type system that can express "this function takes two arguments that can be added using the addition operator" seems like something any type system that delivers the goods it claims to deliver ought to have.
If your comparison is Rust, sure, but you can't even express this in Java. No, Java's type system is not great, but it's a type system that's been used for approximately 500 trillion lines of production code powering critical systems and nobody has ever said "Java sucks because I can't express 'supports the + operator' as a generic type". (It sucks for many other reasons.)
Again, it is factually and objectively an esoteric and unusual case. Nobody in the real world is writing generics like this, only academics or people writing programming blogs about esoterica.
If your argument is that all type systems are bad or deficient, fine, but calling out Python for this when it has the exact same deficiency as basically every other mainstream language is asinine.
> The article never says that either. You are attacking straw men.
The article says "Turning even the simplest function that relied on Duck Typing into a Type Hinted function that is useful can be painfully difficult." The subterfuge is that this is not even remotely close to a simple function because the type being expressed, "supports the + operator", is not even remotely close to a simple type.
> it is factually and objectively an esoteric and unusual case.
Sorry, but your unsupported opinion is not "factual and objective".
> If your argument is that all type systems are bad or deficient
I said no such thing, any more than the article did. Again you are attacking a straw man. (If you had said "limited in what they can express", I might buy that. But you didn't.)
I think I've said all I have to say in this subthread.
It's factual and objective that billions, if not trillions of lines of Java and Go have been deployed and the language still cannot express "supports the + operator" as a type constraint. In production, non-academic settings, people don't generally write code like that.
Again, this is an esoteric limitation from the perspective of writing code that runs working software, not a programming language theory perspective.
How many of those lines of code would have benefited from being able to express that type constraint, if the language made it possible?
You have no idea, and nor does anyone else. But that's what you would need "factual and objective" evidence about to support the claim you made.
By your argument, anything that programming languages don't currently support, must be an "esoteric limitation" because billions if not trillions of lines of code have been written without it. Which would mean programming languages would never add new features at all. But it's certainly "factual and objective" that programming languages add new features all the time. Maybe this is another feature that at some point a language will add, and programmers will find it useful. You don't even seem to be considering such a possibility.
> But the entire post is built upon the premise that accepting all types is good API design. Which it isn't, at all.
Was Tim Peters also wrong way back in the day when he counseled Guido van Rossum to allow floats to be added to integers without a cast, like other popular languages?
If your implication is that "implementing __add__ means you can use the + operator", you are incorrect. This is a common Python beginner mistake, but it isn't really a Python type checking issue, this is complexity with Python built-ins and how they interact with magic methods.
This is a strange and aggressive bit of pedantry. Yes, you'd also need `__radd__` for classes that participate in heterogeneous-type addition, but it's clear what was meant in context. The fundamentals are not all "beginner" level, and beginners wouldn't be implementing operator overloads in the first place (most educators hold off on classes entirely for quite a while; they're pure syntactic sugar, after all, and the use case is often hard to explain to beginners).
Regardless, none of that bears on the original `slow_add` example from the Reddit page. The entire point is that we have an intuition about what can be "added", but can't express it in the type system in any meaningful way. Because the rule is something like "anything that says it can be added according to the protocol — which in practical terms is probably any two roughly-numeric types except for the exceptions, and also most container types but only with other instances of the same type, and also some third-party things that represent more advanced mathematical constructs where it makes sense".
And saying "don't rely on magic methods" does precisely nothing about the fact that people want the + symbol in their code to work this way. It does suggest that `slow_add` is a bad thing to have in an API (although that was already fairly obvious). But in general you do get these issues cropping up.
Dynamic typing has its place, and many people really like it, myself included. Type inference (as in the Haskell family) solves the noise problem (for those who consider it a problem rather than something useful) and is elegant in itself, but just not the strictly superior thing that its advocates make it out to be. People still use Lisp family languages, and for good reason.
But maybe Steve Yegge would make the point better.
> This is a strange and aggressive bit of pedantry.
There's nothing pedantic about it. That's how Python works, and getting into the nuts and bolts of how Python works is precisely why the linked article makes type hinting appear so difficult.
> The entire point is that we have an intuition about what can be "added", but can't express it in the type system in any meaningful way.
As the post explores, your intuition is also incorrect. For example, as the author discovers in the process, addition via __add__/__radd__ is not addition in the algebraic field sense. There is no guarantee that adding types T + T will yield a T. Or that both operands are of the same type at all, as would be the case with "adding" a string and int. Or that A + B == B + A. We can't rely on intuition for type systems.
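A couple of stdlib cases already break each of those intuitions:

```python
from datetime import datetime, timedelta

# The operands need not share a type: datetime + timedelta is fine...
later = datetime(2025, 1, 1) + timedelta(days=1)
assert later == datetime(2025, 1, 2)

# ...and the mirrored expression works via __radd__, even though
# datetime + datetime itself raises TypeError.
assert timedelta(days=1) + datetime(2025, 1, 1) == later

# Nor is + commutative, even within a single type:
assert "ab" + "cd" != "cd" + "ab"
```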
No, it definitionally isn't. The entire point is that `+` is being used to represent operations where `+` makes intuitive sense. When language designers are revisiting the decision to use the `+` symbol to represent string concatenation, how many of them are thinking about algebraic fields, seriously?
And all of this is exactly why you can't just say that it's universally bad API design to "accept all types". Because the alternative may entail rejecting types for no good reason. Again, dynamically typed languages exist for a reason and have persisted for a reason (and Python in particular has claimed the market share it has for a reason) and are not just some strictly inferior thing.
> you can't just say that it's universally bad API design to "accept all types"
Note, though, that that's not really the API design choice that's at stake here. Python will still throw an exception at runtime if you use the + operator between objects that don't support being added together. So the API design choice is between that error showing up as a runtime exception, vs. showing up as flagged by the type checker prior to runtime.
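That runtime behaviour is easy to see (hypothetical class):

```python
class Widget:
    pass

try:
    Widget() + Widget()   # no __add__ defined anywhere
except TypeError as exc:
    # CPython reports: unsupported operand type(s) for +: 'Widget' and 'Widget'
    print(exc)
```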
Or, to put it another way, the API design choice is whether or not to insist that your language provide explicit type definitions (or at least a way to express them) for every single interface it supports, even implicit ones like the + operator, and even given that user code can redefine such interfaces using magic methods. Python's API design choice is to not care, even with its type hinting system--i.e., to accept that there will be interface definitions that simply can't be captured using the type hinting system. I personally am fine with that choice, but it is a design choice that language users should be aware of.
> No, it definitionally isn't. The entire point is that `+` is being used to represent operations where `+` makes intuitive sense.
Huh? There's no restriction in Python's type system that says `+` has to "make sense".
    import requests

    class Smoothie:
        def __init__(self, fruits):
            self.fruits = fruits

        def __repr__(self):
            return " and ".join(self.fruits) + " smoothie"

    class Fruit:
        def __init__(self, name):
            self._name = name

        def __add__(self, other):
            if isinstance(other, Fruit):
                return Smoothie([self._name, other._name])
            return requests.get("https://google.com")

    if __name__ == "__main__":
        print(Fruit("banana") + Fruit("mango"))
        print(Fruit("banana") + 123)
> banana and mango smoothie
> <Response [200]>
So we have Fruit + Fruit = Smoothie. Overly cute, but sensible from a CS101 OOP definition and potentially code someone might encounter in the real world, and demonstrates how not all T + T -> T. And we have Fruit + number = requests.Response. Complete nonsense, but totally valid in Python. If you're writing a generic method `slow_add` that needs to support `a + b` for any two types -- yes, you have to support this nonsense.
I guess that's the difference between the Python and the TypeScript approach here. In general, if something is possible, valid, and idiomatic in JavaScript, then TypeScript attempts to model it in the type system. That's how you get things like conditional types and mapped types that allow the type system to validate quite complex patterns. That makes the type system more complex, but it means that it's possible to use existing JavaScript patterns and code. TypeScript is quite deliberately not a new language, but a way of describing the implicit types used in JavaScript. Tools like `any` are therefore an absolute last resort, and you want to avoid it wherever possible.
When I've used Python's type checkers, I have more the feeling that the goal is to create a new, typed subset of the language, that is less capable but also easier to apply types to. Then anything that falls outside that subset gets `Any` applied to it and that's good enough. The problem I find with that is that `Any` is incredibly infective - as soon as it shows up somewhere in a program, it's very difficult to prevent it from leaking all over the place, meaning you're often back in the same place you were before you added types, but now with the added nuisance of a bunch of types as documentation that you can't trust.
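A small sketch of that infectiveness under mypy's default settings (names hypothetical):

```python
from typing import Any

def load_config() -> Any:
    # Stands in for json.loads() or any other Any-returning boundary.
    return {"retries": "3"}

def retries() -> int:
    cfg = load_config()
    # Indexing an Any yields Any, and Any silently satisfies the
    # declared int return type -- the checker raises no error here,
    # yet this actually returns the string "3" at runtime.
    return cfg["retries"]
```

One untyped boundary, and a function whose signature promises `int` quietly hands back a `str` with the checker's blessing.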
I didn't. I've been mainly a Python, PHP and JavaScript programmer for ~25 years and my experience with typed languages was mostly pre-type-inference Java which felt wildly less productive than my main languages.
Maybe if your C has aggressive test coverage and you’re using Valgrind religiously and always checking errno when you’re supposed to and you’re checking the return value of everything. Otherwise lol. C as it’s written by middling teams is a soup of macros, three-star variables, and questionable data structure implementations, where everybody fiddles with everybody else’s data. I’ll take good C over bad Python, but good C is rare.
> C as it’s written by middling teams is a soup of macros, three-star variables, and questionable data structure implementations, where everybody fiddles with everybody else’s data. I’ll take good C over bad Python, but good C is rare.
Ironically, the worst production C written in 2025 is almost guaranteed to be better than the average production Python, Javascript, etc.
The only people really choosing C in 2025 are those with a ton of experience under their belt, who are comfortable with the language and its footguns due to decades of experience.
IOW, those people with little experience are not choosing C, and those that do choose it have already, over decades, internalised patterns to mitigate many of the problems.
At the end of the day, in 2025, I'd still rather maintain a system written in a statically typed language than a system written in a dynamically typed language.
> The only people really choosing C in 2025 are those with a ton of experience under their belt, who are comfortable with the language and its footguns due to decades of experience.
Experienced users of C can't be the only people who use it if the language is going to thrive. It's very bad for a language when the only ones who speak it are those who speak it well. The only way you get good C programmers is by cultivating bad C programmers, you can't have one without the other. If you cut off the bad programmers (by shunning or just not appealing to them, or loading your language with too many beginner footguns), there's no pipeline to creating experts, and the language dies when the experts do.
The people who come along to work on their legacy systems are better described as archaeologists than programmers. COBOL of course is the typical example: there's no real COBOL programming community to speak of, just COBOL archaeologists who maintain those systems until they too shall die and it becomes someone else's problem, like the old Knight at the end of Indiana Jones.
I find automated tests give me plenty of confidence in the Python code I deploy. I'd rather deploy a codebase with comprehensive tests and no types over one with types and no tests.
I've been dabbling with Go for a few projects and found the type system for that to be pleasant and non-frustrating.
I feel like Go is a very natural step from Python because it's still pretty easy and fast to start with.
(and if you want to embrace static types, the language starting with them might get advantages over an optional backwards compatible type system)
You may have read this already but the biggest surprise one of the Go creators had was Go was motivated by unhappiness with C++, and they expected to get C++ users, but instead people came from Python and Ruby: https://commandcenter.blogspot.com/2012/06/less-is-exponenti...
> I'd rather deploy a codebase with comprehensive tests and no types over one with types and no tests.
With Python, PHP and Javascript, your only option is "comprehensive tests and no types".
With statically typed languages, you have options other than "types with no tests". For example, static typing with tests.
Don't get me wrong; I like dynamically typed languages. I like Lisp in particular. But, TBH, in statically typed languages I find myself producing tests that test the business logic, while in Python I find myself producing tests that ensure all callers in a runtime call-chain have the correct type.
BTW: You did well to choose Go for dipping your toes into statically typed languages - the testing comes builtin with the tooling.
This is a naive realization. When type checking is used to the maximum extent, it becomes just as important as unit testing. It is an actual safety contribution to the code.
Many old school python developers don't realize how important typing actually is. It's not just documentation. It can actually roughly reduce dev time by 50% and increase safety by roughly 2x.
It's claims like that which used to put me off embracing type hints!
I'd been programming for 20+ years and I genuinely couldn't think of any situations where I'd had a non-trivial bug that I could have avoided if I'd had a type checker - claims like "reduce dev time by 50%" didn't feel credible to me, so I stuck with my previous development habits.
Those habits involved a lot of work performed interactively first - using the Python terminal, Jupyter notebooks, the Firefox/Chrome developer tools console. Maybe that's why I never felt like types were saving me any time (and in fact were slowing me down).
Then I had my "they're just interactive documentation" realization and finally they started to click for me.
It depends on the project. If you're working always on one project and you have all the time in the world to learn it (or maybe you wrote it), then you can get away with dynamic types. It's still worse but possible.
But if you aren't familiar with a project then dynamic typing makes it an order of magnitude harder to navigate and understand.
I tried to contribute some features to a couple of big projects - VSCode and Gitlab. VSCode, very easy. I could follow the flow trivially, just click stuff to go to it etc. Where abstract interfaces are used it's a little more annoying but overall wasn't hard and I have contributed a few features & fixes.
Gitlab, absolutely no chance. It's full of magically generated identifiers so even grepping doesn't work. If you find a method like `foo_bar` it's literally impossible to find where it is called without being familiar with the entire codebase (or asking someone who is) and therefore knowing that there's a text file somewhere called `foo.csv` that lists `bar` and the method name is generated from that (or whatever).
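For what it's worth, the same pattern is easy to reproduce in Python too. A toy sketch of data-driven method generation:

```python
import csv
import io

# Pretend this is the contents of a foo.csv somewhere in the repo.
DATA = "foo,bar\n"

class Service:
    pass

# Method names are assembled from data at import time, so the string
# "foo_bar" never appears in the source tree, and neither grep nor an
# IDE's "find references" has anything to anchor on.
for row in csv.reader(io.StringIO(DATA)):
    name = "_".join(row)                        # "foo_bar"
    setattr(Service, name, lambda self, n=name: n)
```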
In VSCode it was literally right-click->find all references.
I have yet to succeed in modifying Gitlab at all.
I did contribute some features to gitlab-runner, but again that is written in Go so it is possible.
So in some cases those claims are not an exaggeration - static types take you from "I give up" to "not too hard".
> In VSCode it was literally right-click->find all references.
Flip side of this is that I hate trying to read code written by teams relying heavily on such features, since typically zero time was spent on neatly organizing the code and naming things to make it actually readable (from top to bottom) or grep-able. Things are randomly spread out in tiny files over countless directories, and it's a maze you stumble around, just clicking identifiers to jump somewhere. Where something is rarely matters, as the IDE will find it. I never develop any kind of mental image of that style of code, and it completely rules out casually browsing the code using simpler tools.
That hasn't been my experience at all. I think maybe it feels more like a maze because when you go-to-definition you often don't actually check where you are in the filesystem, so you don't build a mental map of the repo as quickly as you do when you are forced to manually search through all the files. But I wouldn't say that is better.
Kind of like how you don't learn an area when you always use satnav as quickly as you do when you manually navigate with paper maps. But do you want to go back to paper maps? I don't.
Static type checking (which is what I assume you mean by "typing") can also be a massive pain in the ass that stands in the way of incremental development, even if the end-goal is to ship an api with clear type signatures.
There are developers who design apis by trying to figure out readable invocations. These developers discover, rather than design, type hierarchies and library interfaces.
> Many old school python developers don't realize how important typing actually is.
I don't think this is true. There's simply a communication breakdown where type-first developers don't see the benefits of disabling static checking to design interfaces, and interface-first developers don't see why they should put static checking ahead of interface iteration speed.
> Static type checking (which is what I assume you mean by "typing") can also be a massive pain in the ass that stands in the way of incremental development,
No, they don't. There is nothing about types that makes incremental development harder. They keep having the same benefits when you work incrementally.
> There is nothing about types that would make incremental development harder.
Oh, please, this is either lack of imagination or lack of effort to think. You've never wanted to test a subset of a library halfway through a refactor?
Yes, type checkers are very good at tracking refactoring progress. If it turns out that you can proceed to test some subset, then congratulations, you found a new submodule.
What in the world are you talking about? Please specify how a lack of types helped you in your aforementioned scenario.
I don't think it's a lack of curiosity from others. It's more like a fundamental lack of knowledge from you. Let's hear it. What is it you're actually talking about? Testing a subset of a library halfway through a refactor? How does a lack of types help with that?
> There are developers who design apis by trying to figure out readable invocations. These developers discover, rather than design, type hierarchies and library interfaces.
My hunch is that the people who see no downsides whatsoever in static typing are those who mostly just consume APIs.
There are downsides. But the upsides outweigh the downsides.
I'm not just a consumer of APIs. I've done game programming, robotics, embedded systems development (with C++ and Rust), web frontend development (with and without React, with jQuery, Angular, TypeScript, plain JS, zod), and web backend development (with Golang, Haskell, Node.js TypeScript, and lots and lots of Python with many of the most popular frameworks: Flask + SQLAlchemy, Django, FastAPI + pydantic).
I've done a lot. I can tell you: if you don't see how typed languages outweigh untyped languages, you're a programmer with experience heavily weighted toward untyped programming. You don't have balanced experience to make a good judgement. Usually these people have a "data scientist" background: data analyst or data scientist or machine learning engineer, etc. These guys start programming heavily in the python world WITHOUT types and they develop unbalanced opinions shaped by their initial styles of programming. If this describes you, then stop and think... I'm probably right.
You are wrong. I learned programming mostly in C++ in the late 90's, and programmed in C, C++ and Java in professional settings for a decade or so, and still do from time to time.
Hm if you want or have time, can you give me a specific example of where no types are clearly superior to types? Maybe you can convince me but I still think your opinion is wrong despite your relevant experience.
>There are developers who design apis by trying to figure out readable invocations. These developers discover, rather than design, type hierarchies and library interfaces.
No, you're one of the old school python developers. Types don't hinder creativity, they augment it. The downside is the slight annoyance of updating a type definition and the run time definition vs. just updating the runtime definition.
Let me give you an example of how it hinders creativity.
Let's say you have an interface that is immensely complex: many nested structures, thousands of keys. And let's say you want to change the design by shifting 3 or 4 things around. Let's also say this interface is utilized by hundreds of other methods and functions.
When you move 3 or 4 things around in a complex interface you're going to break a subset of those hundreds of other methods or functions. You're not going to know where they break if you don't have type checking enabled. You're only going to know if you tediously check every single method/function OR if it crashes during runtime.
With a statically typed definition you can make that change, and the type checker will identify EVERY single place where the methods that use that type need to change as well. This allows you to be creative and make any willy-nilly changes you want, because you are confident that ANY breakage will be caught by the type checker. This speeds up creativity, while without it you will be slowed down, and even afraid to make the breaking change.
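As an illustration of that point (hypothetical names; assume mypy or pyre is running over the codebase):

```python
from dataclasses import dataclass

@dataclass
class Order:
    # Suppose this field was just renamed from `total` to `total_cents`
    # as part of the interface change being described.
    total_cents: int

def apply_discount(order: Order) -> int:
    # Every stale call site is flagged statically before anything runs, e.g.:
    #   error: "Order" has no attribute "total"
    # return order.total - 100
    return order.total_cents - 100
```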
You are basically the stereotype I described. An old school python developer. Likely one who got used to programming without types and now hasn't utilized types extensively enough to see the benefit.
>I don't think this is true. There's simply a communication breakdown where type-first developers don't see the benefits of disabling static checking to design interfaces, and interface-first developers don't see why they should put static checking ahead of interface iteration speed.
This is true. You're it. You just don't know it. When I say these developers don't know I'm literally saying they think like you and believe the same things you believe BECAUSE they lack knowledge and have bad habits.
The habit thing is what causes the warped knowledge. You're actually slowed down by types because you're not used to them: you spent years coding in python without types, so it's ingrained for you to test and think without types. Adding types becomes a bit of an initial overhead for these types of people because their programming style is so entrenched.
Once you get used to it, and once you see that it's really just a slight additional effort, then you will get it. But it takes a bit of discipline and practice to get there.
> It can actually roughly reduce dev time by 50% and increase safety by roughly 2x.
Type annotations don’t double productivity. What does “increase safety by 2×” even mean? What metric are you tracking there?
In my experience, the main non-documentation benefit of type annotations is warning where the code is assuming a value where None might be present. Mixing up any other kind of types is an extremely rare scenario, but NoneType gets everywhere if you let it.
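A minimal example of the None case (names hypothetical):

```python
from typing import Optional

def find_email(name: str) -> Optional[str]:
    users = {"alice": "a@example.com"}
    return users.get(name)          # None when the user is missing

def email_domain(name: str) -> str:
    email = find_email(name)
    # Without the Optional annotation, calling email.split() directly is a
    # latent AttributeError; with it, a checker reports something like:
    #   error: Item "None" of "Optional[str]" has no attribute "split"
    if email is None:
        raise KeyError(name)
    return email.split("@")[1]
```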
Obviously this post is still firmly in made-up-statistics land, but I agree with OP: in some cases they absolutely do.
New code written by yourself? No, probably not. But refactoring a hairy old enterprise codebase? Absolutely a 2×, 3× multiplier to productivity / time-to-correctness there.
>Type annotations don’t double productivity. What does “increase safety by 2×” even mean? What metric are you tracking there?
My own anecdotal metric. Isn't that obvious? The initial post was an anecdotal opinion as well. I don't see a problem here.
>In my experience, the main non-documentation benefit of type annotations is warning where the code is assuming a value where None might be present. Mixing up any other kind of types is an extremely rare scenario, but NoneType gets everywhere if you let it.
It's not just None. Imagine some highly complex object with nested values and you have some function like this:
    def modify_direction(direction_object) -> ...

wtf is direction object? Is it in Cartesian or is it in polar? Is it in 2D or 3D? Most old school python devs literally have to find where modify_direction is called, and they find something like this:

    modify_direction(create_quat(...))

And then boom, you figure out what it does by actually reading all the complex quaternion math create_quat does.
Absolutely insane. If I have a type, I can just look at the type to figure everything out... you can see how much faster it is.
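The annotated version being argued for might look roughly like this (the `Quaternion` type is hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Quaternion:
    w: float
    x: float
    y: float
    z: float

def modify_direction(direction: Quaternion) -> Quaternion:
    # The body is a stand-in (conjugation); the point is that the
    # signature alone answers "what is direction_object?".
    return Quaternion(direction.w, -direction.x, -direction.y, -direction.z)
```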
Oh and get this. Let's say there's someone who feels euler angles are better. So he changes create_quat to create_euler. He modifies all the places create_quat is used (which is about 40 places) and he misses 3 or 4 places where it's called.
He then ships it to production. Boom. The extra time debugging production when it crashes, and also the extra time tediously finding where create_quat was used. All of that could have been saved by a type checker.
I'm a big python guy. But I'm also big into haskell. So I know both the typing worlds and the untyped worlds really well. Most people who complain like you literally have mostly come from a python background where typing isn't used much. Maybe you used types occasionally but not in a big way.
If you used both untyped languages and typed languages extensively you will know that types are intrinsically better. It's not even a contest. Anyone who still debates this stuff just lacks experience.
> If you used both untyped languages and typed languages extensively you will know that types are intrinsically better. It's not even a contest. Anyone who still debates this stuff just lacks experience.
Or have enough experience to have lived e.g. the J2EE and C++ template hells and see where this is going.
> My own anecdotal metric. Isn't that obvious? The initial post was an anecdotal opinion as well. I don't see a problem here.
WTF is “an anecdotal metric”‽ That just sounds like an evasive way to say “I want to make up numbers I can’t justify”.
> wtf is direction object? Is it in Cartesian or is it in polar? Is in 2D or 3D?
This seems very domain-specific.
> Most people who complain like you literally have mostly come from a python background where typing isn't used much. Maybe you used types occasionally but not in a big way.
> If you used both untyped languages and typed languages extensively you will know that types are intrinsically better. It's not even a contest. Anyone who still debates this stuff just lacks experience.
I’ve got many years of experience with static typed languages over a 25 year career. Just because somebody disagrees with you, it doesn’t mean they are a clueless junior.
It's a metric (how much more productive he is), and anecdotal (based only on his experience). Pretty obvious I would have thought.
> This seems very domain-specific.
It was an example from one domain but all domains have types of things. Are you really trying to say that only 3D games specifically would benefit from static types?
> Just because somebody disagrees with you, it doesn’t mean they are a clueless junior.
Clueless senior then I guess? Honestly I don't know how you can have this much experience and still not come to the obvious conclusion. Perhaps you only write small scripts or solo projects where it's more feasible to get away without static types?
What would you say to someone who said "I have 25 years of experience reading books with punctuation and I think that punctuation is a waste of time. Just because you disagree with me doesn't mean I'm clueless."?
>WTF is “an anecdotal metric”‽ That just sounds like an evasive way to say “I want to make up numbers I can’t justify”.
What, I have to have scientific papers for every fucking opinion I have? The initial parent post was an anecdotal opinion. Your post is an opinion. I can't have opinions here without citing a scientific paper that's 20 pages long and no one is going to read but just blindly trust because it's "science"? Come on. What I'm saying is self evident to people who know. There are thousands of things like this in the world where people just know even though statistical proof hasn't been measured or established. For example, eating horse shit every day probably isn't healthy even though there isn't SCIENCE that proves this action as unhealthy directly. Type checking is just one of those things.
OBVIOUSLY I think development is overall much better, much faster and much safer with types. I can't prove it with metrics, but I'm confident my "anecdotal" metrics, which I prefaced with "roughly", are "roughly" ballpark trueish.
>This seems very domain-specific.
Domain specific? Basic orientation with quaternions and euler angles is specific to reality. Orientation and rotations exist in reality and there are thousands and thousands of domains that use it.
Also the example itself is generic. Replace euler angles and quats with vectors and polar coordinates. Or cats and dogs. Same shit.
>I’ve got many years of experience with static typed languages over a 25 year career. Just because somebody disagrees with you, it doesn’t mean they are a clueless junior.
The amount of years of experience is irrelevant. I know tons of developers with only 5 years of experience who are better than me and tons of developers with 25+ who are horrible.
I got 25 years as well. If someone disagrees with me (on this specific topic), it absolutely doesn't mean they are a junior. It means they lack knowledge and experience. This is a fact. It's not an insult. It just means for a specific thing they don't have experience or knowledge which is typical. I'm sure there's tons of things where you could have more experience. Just not this topic.
If you have experience with static languages it likely isn't that extensive. You're likely more of a old school python guy who spend a ton of time programming without types.
> What I have to have scientific papers for every fucking opinion I have?
No, but if you’re going to say things like “increase safety by roughly 2x” and you can’t even identify the unit, then you are misleading people.
It’s absolutely fine to have an opinion. It’s not fine to make numbers up.
> I'm confident my "anecdotal" metrics, which I prefaced with "roughly", are "roughly" ballpark trueish.
Okay, so if it’s 1.5×, 2.0×, or 2.5×… again, what metric? What unit are we dealing with?
You’re claiming that it’s “in the ballpark”, but what is “in the ballpark”? The problem is not one of accuracy, the problem is that it’s made up.
> If someone disagrees with me (on this specific topic), it absolutely doesn't mean they are a junior. It means they lack knowledge and experience. This is a fact.
It’s not a fact, it’s ridiculous. You genuinely believe that if somebody disagrees with you, it’s a fact that they lack knowledge and experience? It’s not even remotely possible for somebody to have an informed difference of opinion with you?
>No, but if you’re going to say things like “increase safety by roughly 2x” then if you can’t even identify the unit then you are misleading people.
So when I talk about multipliers I have to have a unit? What is the unit of safety? I can't say something like 2x more safe? I just have to say more safe? What if I want to emphasize that it can DOUBLE safety?
Basically with your insane logic people can't talk about productivity or safety or multipliers at the same time because none of these concepts have units.
Look I told YOU it's anecdotal, EVERYONE can read it. You're no longer "deceived" and no one else is.
>Okay, so if it’s 1.5×, 2.0×, or 2.5×… again, what metric? What unit are we dealing with?
If you don't have the capacity to understand what I'm talking about without me specifying a unit, then I'll make one up:
I call it safety units. The number of errors you catch in production. That's my unit: 1 caught error in prod in a year. For untyped languages let's say you catch about 20 errors a year. With types that goes down to 10.
>It’s not a fact, it’s ridiculous. You genuinely believe that if somebody disagrees with you, it’s a fact that they lack knowledge and experience? It’s not even remotely possible for somebody to have an informed difference of opinion with you?
What? And you think all opinions are equal, and everyone has the freedom to have any opinion they want, and no one can be right or wrong because everything is just an opinion? Do all opinions need to be fully respected even when they're insane?
Like my example: if you have the opinion that eating horse shit is healthy, I'm going to make a judgement call that your opinion is WRONG. Lack of typing is one of these "opinions".
> If someone disagrees with me (on this specific topic), it absolutely doesn't mean they are a junior. It means they lack knowledge and experience. This is a fact.
You think it’s impossible for anybody to have an informed opinion that disagrees with yours. You literally think yours is the only possible valid opinion. If that doesn’t set off big warning bells in your head, you are in dire need of a change in attitude.
This conversation is not productive, let’s end it.
>You think it’s impossible for anybody to have an informed opinion that disagrees with yours. You literally think yours is the only possible valid opinion. If that doesn’t set off big warning bells in your head, you are in dire need of a change in attitude.
I mean, do you think we should have a fair and balanced discussion about the merits of child molestation and rape? We should respect other people's opinion and not tell them they are wrong if their opinion differs? That's what I think of your opinion. I think your opinion is utterly wrong, and I do think my opinion is the valid opinion.
Now that doesn't mean I disrespect your opinion. That doesn't mean you're not allowed to have a different opinion. It just means I tell you straight up, you're wrong and you lack experience. You're free to disagree with that and tell me the exact same thing. I'm just blunt, and I welcome you to be just as blunt to me. Which you have.
The thing I don't like about you is that you turned it into discussion about opinions and the nature of holding opinions. Dude. Just talk about the topic. If you think I'm wrong. Tell me straight up. Talk about why I'm wrong. Don't talk about my character and in what manner I should formulate opinions and what I think are facts.
>This conversation is not productive, let’s end it.
I agree let's end it. But let's be utterly clear. YOU chose to end it with your actions by shifting the conversation into saying stuff like "you literally think yours is the only possible opinion." Bro. All you need to do is state why you think my opinion is garbage and prove it wrong. That's the direction of the conversation, you ended it by shifting it to a debate on my character.
I really love Python for its expedience, but type hints still feel like they don't belong in the language. They don't seem to come with the benefits of optimisation that you get with statically typed languages. As someone who uses C and Julia (and wishes they had time for Rust), introducing solid typing yields better end results at a minimum, or is a requirement at the other end of the scale.
The extra typing clarification in python makes the code harder to read. I liked python because it was easy to do something quickly and without that cognitive overhead. Type hints, and they feel like they're just hints, don't yield enough of a benefit for me to really embrace them yet.
Perhaps that's just because I don't use advanced features of IDEs. But then I am getting old :P
EDIT: also, this massively depends on what you're doing with the language! I don't have huge customer workloads to consider any longer..!
> The extra typing clarification in python makes the code harder to read
It’s funny, because for me it's quite the opposite: I find myself reading Python more easily when there are type annotations.
One caveat: for that to happen, I need to know that type checking is also in place, or else my brain dismisses the annotations as potential noise.
I guess this is why in Julia or Rust or C you have this stronger feeling that types are looking after you.
I think the fact that they fundamentally don't look after you is where my resistance comes from. Will try to evaluate some newer code that uses them and see how I get on a bit more :)
> They don't seem to come with the benefits of optimisation that you get with static typed languages
They don't. And cannot, for compatibility reasons. Aside from setting some dunders on certain objects (which are entirely irrelevant unless you're doing some crazy metaprogramming thing), type annotations have no effect on the code at runtime. The Python runtime will happily bytecode-compile and execute code with incorrect type annotations, and a type-checking tool really can't do anything to prevent that.
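A short demonstration of that point: the annotations below are deliberately wrong, yet CPython compiles and runs the code without complaint, merely storing them in `__annotations__`.

```python
# Both annotations here are intentionally incorrect; the runtime does not
# check them, it only stores them as metadata on the function object.
def double(x: str) -> list:
    return x * 2

print(double(21))              # runs fine despite the "str" annotation
print(double.__annotations__)  # the wrong hints, faithfully stored
```

A type checker like mypy would flag the call, but nothing stops the bytecode from executing.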
Remembering projects where type hints would have been helpful to grok the code, I now mostly like them. They are useful when you come back after days or weeks and try to remember what this function produces and what that one actually takes in.
And Python always was rather strongly typed, so you had to consider the types anyway. Now you get notes. Which often do help.
> The extra typing clarification in python makes the code harder to read.
It depends what you mean by "read". If you literally mean you're doing a weird Python poetry night then sure they're sort of "extra stuff" that gets in the way of your reading of `fib`.
But most people think of "reading code" as reading and understanding code, and in that case they definitely make it easier.
As someone who has read code as easily as English for decades (which is apparently rare, if my co-workers are any indication), too many type annotations clutter it up and make it a lot harder to read. And this is after having used Typescript a lot in the past year and liking that system - it works well because so much can be inferred.
I enforce strong types on all Python code I’m responsible for - and make sure others don’t play fast and loose with dict[str, Any] when they could use a well defined type.
Doing otherwise is just asking for prod incidents.
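As a hedged illustration of "a well defined type" replacing dict[str, Any] (the field names here are invented, not from the poster's codebase):

```python
from dataclasses import dataclass

# Instead of passing dict[str, Any] around, give the payload a shape.
# A checker then catches typos and wrong field types before production.
@dataclass(frozen=True)
class Transfer:
    account_id: str
    amount_cents: int
    currency: str = "USD"

t = Transfer(account_id="acc-1", amount_cents=1500)
print(t.amount_cents)  # 1500
# t.amonut_cents  <- a checker flags this typo; a dict lookup would not
```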
I worked on a project that did this. Drove me absolutely nuts. It's like having all the worst parts of a dynamic language and a static language with none of the benefits.
I'd much rather just work in a statically typed language from the start.
I too would much rather work in a statically typed language, but sometimes you have to work with what you’ve got.
These systems are part of the core banking platform for a bank so I’d rather some initial developer friction over runtime incidents.
And I say initial friction because although developers are sometimes resistant to it initially, I’ve yet to meet one who doesn’t come to appreciate the benefits over the course of working on our system.
Different projects have different requirements, so YMMV but for the ones I’m working on type hints are an essential part of ensuring system reliability.
I'm not opposed to type hints, I use them everywhere. It's especially the strict linting.
But it's a fair point. If you truly have no option, it's better than absolutely nothing. I really wish people would stop writing mission-critical production code in Python.
I feel like it's more often a result of suffering from success that leads to these situations, rather than a lack of foresight to begin with.
For example I work on a python codebase shared by 300+ engineers for a popular unicorn. Typing is an extremely important part of enforcing our contracts between teams within the same repository. For better or for worse, python will likely remain the primary language of the company stack.
Should the founder have chosen a better language during their pre-revenue days? Maybe, but at the same time I think the founder chose wisely -- they just needed something that was _quick_ (Django) and capable of slapping features / ecosystem packages on top of to get the job done.
For every successful company built on a shaky dynamic language, there are probably 10x more companies that failed on top of a perfect and scalable stack using static languages.
Type hints seem fantastic for when you're in maintenance mode and want to add sanity back to a system via automated tooling.
However for new projects I find that I'd much rather pick technologies that start me off with a sanity floor which is higher than Python's sanity ceiling. At this point I don't want to touch a dynamically typed language ever again.
What exactly drove you nuts? The python ecosystem is very broad and useful, so it might be suitable for the application (if not, reasonable that you'd be frustrated). With strict mypy/pyright settings and an internal type-everything culture, Python feels statically typed IME.
It's not even close compared to working with Java or Go or any language built with static typing in mind.
To be clear, I'm not opposed to type hints. I use them everywhere, especially in function signatures. But the primary advantage to Python is speed (or at least perceived speed but that's a separate conversation). It is so popular specifically because you don't have to worry about type checking and can just move. Which is one of the many reasons it's great for prototypes and fucking terrible in production. You turn on strict type checking in a linter and all that goes away.
Worse, Python was not built with this workflow in mind. So with strict typing on, when types start to get complicated, you have to jump through all kinds of weird hoops to make the checker happy. When I'm writing code just to make a linter shut up something is seriously wrong.
Trying to add typing to a dynamic language is, in my opinion, almost always a bad idea. Either do what TypeScript did and write a language that compiles down to the dynamic one, or just leave it dynamic.
And if you want types just use a typed language. In a production setting, working with multiple developers, I would take literally almost any statically typed language over Python.
But TypeScript erases (its) types at runtime, exactly like Python. Python is Python's TypeScript. Whether you want TS or JS-like semantics is entirely dependent on whether you use a type checker and whether you consider its errors a build breaker.
I'm not sure what you're trying to say here. If you mean Python's type annotations are erased at runtime... okay? It still has runtime type information. It's not "erasure" in the sense that term applies to Java, for example. And TypeScript compiles down to JavaScript, so obviously its runtime behavior is going to be the same as JavaScript's.
In my view it's always a mistake to try to tack static typing on top of a dynamic language. I think TS's approach is better than Python's, but still not nearly as good as just using a statically typed language.
The fact that the types are reflected at runtime is what makes FastAPI/Pydantic possible, letting us use Python types to define data models used for serialization, validation, and schema generation. In TypeScript, we have to use something like Zod, instead of "normal" TypeScript types, because the types are not reflected at runtime.
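That runtime reflection can be demonstrated with the stdlib alone. This toy validator (not Pydantic's actual API) reads the annotations back at runtime via typing.get_type_hints:

```python
from dataclasses import dataclass, fields
from typing import get_type_hints

@dataclass
class User:
    id: int
    name: str

def invalid_fields(obj) -> list[str]:
    """Toy validator: compare each field's runtime value to its hint."""
    hints = get_type_hints(type(obj))
    return [
        f.name for f in fields(obj)
        if not isinstance(getattr(obj, f.name), hints[f.name])
    ]

print(invalid_fields(User(id=1, name="a")))    # []
print(invalid_fields(User(id="1", name="a")))  # ['id']
```

In TypeScript the `User` interface would be gone by runtime, which is exactly why a parallel schema library like Zod is needed there.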
I think a couple of things have to be untangled here.
The problem we are talking about in both Python and TS comes from the fact that they are (or compile down to) dynamic languages. These aren't issues in statically typed languages... because the code just won't compile if it's wrong, and you don't have to worry about getting data from an untyped library.
I don't know a lot about Zod, but I believe the problem you are referring to is more about JavaScript than TS. JavaScript does a LOT of funky stuff at runtime; Python, thank God, actually enforces some sane type rules at runtime.
My point was not about how these two function at runtime. My point was that if you want to tack static typing onto a dynamic language, TypeScript's approach is the better one, but even it can't fix the underlying issues with JS.
You could take a similar approach in Python. We could make a language called Tython that is statically typed and then compiles down to Python. You eliminate an entire class of bugs at compile time, get a far more reliable experience than the current weirdness with gradual typing and linters, and you still get Python's runtime type information to deal with things like interop with existing Python code.
TypeScript requires a compiler to produce valid JavaScript. Python shoved types into Python 3 without breaking backwards compatibility, I think.
You would never have typing.TYPE_CHECKING to check if type checking is being done in TypeScript, for example, because type hints can't break JavaScript code, something that can happen in Python when you have cyclic imports just to add types.
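For readers who haven't hit this, the idiom looks like the sketch below (the module and class names are invented). The guarded import exists only for the type checker and never runs, which is how a types-only import cycle gets broken:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only during static analysis, never executed at runtime, so a
    # cycle like models.py <-> services.py cannot blow up on import.
    from hypothetical_models import Order  # type: ignore[import-not-found]

def describe(order: "Order") -> str:
    # The string annotation is resolved by the checker, not the runtime.
    return f"order {order}"

print(TYPE_CHECKING)        # False at runtime
print(describe("o-123"))    # order o-123
```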
I would say mypy is better than nothing but it still misses things sometimes, and makes some signatures difficult or impossible to write. I use it anyway, but patched-on static typing (Erlang, Clojure, and Racket also have it) seems like a compromise from the get-go. I'd rather have the type system designed into the language.
I went from mypy to pyright to basedpyright and just started checking out pyrefly (the OP), and it's very promising. It's written in Rust so it's very efficient.
For the kind of work I'm using Python for (computer vision, ML), not really. The ecosystem isn't there and even when it's possible it would be much less productive for very little gain. Typed Python actually works quite well in my experience. We do use C++ for some hand-written things that need to be fast or use libraries like CGAL, but it has a lot of disadvantages like the lack of a REPL, slow compile times and bad error messages.
Typescript turned me into a believer but my gosh do python typings feel clumsy and quickly busy up files. I get why, and it’s not exactly realistic, but I wish a lot of it didn’t require an import.
Whatever the solution is, it doesn’t include giving up on Python typings.
Yeah, post-3.10 you don't need Union, Optional, List, Dict, Tuple. Any is still necessary when you want to be permissive, and I'm still hoping for an Unknown someday...
By default, Mypy warns you if you try to reassign a method of any object[1]. It will also warn you when you access non-existent attributes[2]. So if you have a variable typed as `object`, the only attributes you can manipulate without the type checker nagging are `__doc__`, `__dict__`, `__module__`, and `__annotations__`. Since there are very few reasons to ever reassign or manipulate these attributes on an instance, I think the `object` type gets us pretty darn close to an "unknown" type in practice.
There was a proposal[3] for an unknown type in the Python typing repository, but it was rejected on the grounds that `object` is close enough.
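A small sketch of `object` acting as a near-unknown type; the commented-out line is the kind of thing mypy rejects until you narrow the type:

```python
def handle(value: object) -> str:
    # value.upper()  <- mypy: "object" has no attribute "upper"
    # Narrowing is required before any real use, just like an unknown type:
    if isinstance(value, str):
        return value.upper()
    return repr(value)

print(handle("hi"))  # HI
print(handle(42))    # 42
```

Compare `Any`, which would let `value.upper()` through unchecked for every input.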
I am glad they improved this but I still like Optional[], and to a lesser extent, Union[]. It's much more readable to have Optional[str] compared to str | None.
I disagree with `Optional`. It can cause confusion in function signatures, since an argument typed as "optional" might still be required if there is no default value. Basically I think the name is bad, it should be `Nullable` or something.
I believe Python's own documentation also recommends the shorthand syntax over `Union`. Linters like Pylint and Ruff also warn if you use the imported `Union`/`Optional` types. The latter even auto-fixes it for you by switching to the shorthand syntax.
Python types - all the onus of static types, with none of the performance!
I enjoy packages like pydantic and SOME simple static typing, but if I’m implementing anything truly OOP, I wouldn’t first reach for Python anyway; the language doesn’t even do multiple constructors or public/private props.
Edit: as a side note, I was interested to learn that for more verbose type specification, it’s possible to define a type in variable-like syntax at the top: mytype = int|str|list|etc.
There is an important (I would say primary) benefits of types that isn't performance: it's making a program structure [you | your IDE | LLMs] can reason about.
The most annoying part is that the type checking exists outside the regular runtime. I constantly run into situation where the type checker is happy, but the thing explodes at runtime or the type checker keeps complaining about working code. And different type checkers will even complain about different things regularly too. It's a whole lot of work making every part of the system happy and the result still feels extremely brittle.
If you care about micro-optimizations, the first one that overwhelms everything else is to not use Python.
Anyway, if your types are onerous, you are using them wrong. Even more in a progressive type system where you always have the option of not using them or putting an "Any" there.
While most responders seem to have latched on to the pedantry of my "OOP" comment, this is what my comment was intended to imply: why force the verbosity of static types on Python when statically typed (i.e. compiled) languages typically have much better runtime performance? The real answer, I suspect, is quality of life / readability of code, but they are just "hints" after all.
Ruby doesn't have multiple constructors, and literally everything in Ruby is an object, so it's practically impossible to avoid "doing OOP". I don't see how being "truly OOP" has anything to do with the language supporting method overloads.
So don’t? Just annotate the stuff that’s not annoying. I agree, IO is annoying to type. Decorators are fine, it’s just a function that returns a function. It’s still a win, however much you decide to annotate.
I believe types are a great way to encourage good practices with relatively little investment. They provide type-safety, act as living documentation, and add an extra layer of protection in production.
However, in a large codebase, consistency can become a challenge. Different developers often approach the same problem in different ways, leading to a mix of type patterns and styles, especially when there’s no clear standard or when the problem itself is complex.
With the rise of LLM-generated code, this issue becomes even more pronounced — code quality and craftsmanship can easily degrade if not guided by proper conventions.
I recently discovered that the folks building uv and ruff are also building a type checker, and it already works really well (and fast!), despite beta status:
My experience adding types to un-typed Python code has convinced me that static typing should be required for anything more complicated than a single purpose script. Even in old and battle tested code bases so many tiny bugs and false assumptions are revealed and then wiped out.
It's not perfect in Python, and I see some developers introduce unnecessary patterns trying to make type-"perfect" `class Foo(Generic[T, V])` (or whatever) abstractions where they are not really necessary. But if the industry is really going all-in on Python for more than scripting, so should we for typed Python.
I know I am going to be in the minority, but I don't understand why we can't let Python be Python. Static typing is great, and there are already other statically typed languages for all your needs. Why not use them?
Well, at least it doesn't create two incompatible Pythons like async and (I assume) free threading.
Because Python has a lot of things it's great at (numeric stuff, ML, computer vision, scripting), and with types you can actually rely on it to work. It's the best of both worlds.
I sometimes felt that Python was rather strong in many parts of typing. As such, being able to track what type something is would often have been useful, instead of waiting for it to crash with some error.
Like back in 2.7 difference between byte-array and string... Offloading such cases is mentally useful.
An entire class of bugs, wiped out by a thing called a "compiler". Gigahours of downtime and bug fixing globally prevented by a modest extra step up front. Great stuff.
I had someone say to me they preferred strict type checking in a Python linter over a statically typed language because they "don't like a build step"...
Dudes it's literally just worse compilation with extra steps.
I'm in favor of partitioning the set of reasons it can fail to compile into separate checks with separate tools. Taming the zoo of tooling is extra work, but smaller more focused tools are easier to work with once you understand their relationship to their neighbors.
There's a world of difference between:
> I've been using a different type checker and I like it, you should try it
And
> I'd like to switch our project to a different compiler
Yes, if you are stuck with Python something is certainly better than nothing. But we shouldn't be writing large production apps in it in the first place.
No you don't. You get the illusion of static types without the actual upsides.
For any even medium sized project or anything where you work with other developers a statically typed language is always going to be better. We slapped a bunch of crap on Python to make it tolerable, but nothing more.
I disagree and I've been using Haskell professionally for ten years so I know what I'm talking about when it comes to types. Typed Python isn't perfect but it's totally workable with medium sized projects and gives you access to a great ecosystem.
Everyone knows Haskell is only used for white papers :p
Yeah it's workable, and better than nothing. But it's not better than having an actual static type system.
1. It's optional. Even if you get your team on board you are inevitably going to have to work with libraries that don't use type hints
2. It's inconsistent, which makes sense given that it's tacked onto a language never intended for it.
3. I have seen some truly goofy shit written to make the linter happy in more complex situations.
I honestly think everything that's been done to try to make Python more sane outside scripting or small projects (and the same applies to JS and TS) is a net negative. Yes, it has made those specific ecosystems better and more useful, but it's removed the incentive to move to better technology/languages actually made to do the job.
I'd say Typescript/JavaScript on the backend are a bad idea across the board. That's not really because of this conversation, just in general.
The comment about Typescript was really about JavaScript. It's a patch on top of JavaScript, which is a shit show and should have been replaced before it ended up forming the backbone of the internet.
Python, typed or otherwise, isn't good for anything past prototyping, piping a bunch of machine learning libraries together, or maybe a small project. The minute the project gets large or starts to move towards actual production software Python should be dropped.
None of those things are to do with typing. Python is slow because it's interpreted, bad at parallelism because of the GIL (which is going away), and bad at deployment because of the messy ecosystem and build tools (although it's pretty good with Nix). Conversely, it isn't static typing that makes other languages good at those things.
It is a great choice though for many problems where performance isn't critical (or you can hand the hard work off to a non-Python library like Numpy or Torch). Typing just makes it even better.
In addition to what others have mentioned, it also just makes it easier to come back later to a code base and make changes, especially refactoring. In many cases you don't even really have to add many type hints to get benefits from it, since many popular libraries are more-or-less already well-typed. It can also substitute for many kinds of unit tests that you would end up writing even 5 years ago. If you're an infrastructure engineer or data scientist that's usually just writing a lot of glue code, then it greatly helps speed up your output (I've found)
Without typing it is literally 100x harder to refactor your code. Types are like a contract which, if maintained after the refactor, gives you confidence. Over time it leads to faster development.
I really think that Rust has one of the best designed/inspired type systems.
If I had to rewrite a Python project, I would consider Rust or another statically typed language before choosing to continue in a dynamic language with types bolted on. I hope the situation improves for dynamic languages with optional types, but it still feels weird and bolted onto the language because it is.
I'm a professional .Net Core developer, but I'd throw my hat in the ring for Swift on this one. While obviously not exactly a 1:1 with Rust, there is definitely some common benefits between the two. Though, from what I understand of Rust (very little), its typing system is slightly more strict than Swift's which is slightly more strict than C#'s.
Python's dynamic nature can make it quite difficult to express some things correctly. That, or the type checkers have issues when it comes to understanding what would be considered safe in other languages. Years ago, when I knew far less about types and programming, I never had such problems in, for example, Java. It was sometimes stupid, but I always found a way to express things. Although it could also be that I merely want more out of inference and safety now.

For example, recently I wanted a pipeline of steps, where the steps could have any input and output type, as long as each type aligns with the previous step's types, and the type checker should also know what the final output type is. Additionally, I wanted it to work so that I don't have to add all the steps at once, so that I can construct the pipeline step by step.

Tried for hours, but didn't find a working solution that type checks. Also tried with the help of LLMs, which gave superficially great-looking code for this, but then there was always some type error somewhere, and they struggled to fix it. Ultimately, I gave up on the type checking between steps and on the output type of the pipeline, as I realized that I had invested hours into something that might be impossible, or way, waaay too much work for what I'd get from it. I would not have spent any time on this without type annotating and would have simply gone with a dynamic solution.
That doesn't sound like it'd have something to do with the dynamic nature of python. Type checking is a static analysis of the source code, so if you'd want something to be inferred dynamically, then you'll have to make use of generics:
I think this pipeline implementation does some things differently from what I wanted (but did not precisely describe). It seems that each step is run right away, as it is "added", rather than collected and run when `terminate` is called. Also, each step can only consume the result of the previous step, not the results of earlier steps. This can be worked around by ending the pipeline and then starting multiple pipelines from the result of the first, if needed. I think you would need to import Generic and write something like `class Pipeline(Generic[T]):` as well? Or is `class Pipeline[T]:` a short form of that?
In my experiment I wanted to get a syntax like this:
    pipeline = Pipeline()
    ...some code here...
    pipeline.add_step(Step(...some meta data..., ...actual procedure to run...))
So then I would need generics for `Step` too and then Pipeline would need to change result type with each call of `add_step`, which seems like current type checkers cannot statically check.
I think your solution circumvents the problem maybe, because you immediately apply each step. But then how would the generic type work? When is that bound to a specific type?
> Or is `class Pipeline[T]:` a short form of that?
Yes, since 3.12.
> Pipeline would need to change result type with each call of `add_step`, which seems like current type checkers cannot statically check.
Sounds like you want a dynamic type with your implementation (note the emphasis). Types shouldn't change at runtime, so a type checker can perform its duty. I'd recommend rethinking the implementation.
This is the best I can do for now, but it requires an internal cast. The caller side is type safe, though, and the same principle as above applies.
I'm founding a company that is building an AOT compiler for Python (Python -> C++ -> object code) and it works by propagating type information through a Python function. That type propagation process is seeded by type hints on the function that gets compiled:
This sounds even worse than Modular/Mojo. They made their language look terrible by trying to make it look like Python, only to effectively admit that source compatibility will not really work any time soon. Is there any reason to believe that a different take on the same problem with stricter source compatibility will work out better?
Have you talked to anyone about where this flat out will not work? Obviously it will work in simple cases but someone with good language understanding will probably be able to point out cases where it just won't. I didn't read your blog so apologies if this is covered. How does this compiler fit into your company business plan?
Our primary use case is cross-platform AI inference (unsurprising), and for that use case we're already in production at startups and larger co's.
It's kind of funny: our compiler currently doesn't support classes, but we support many kinds of AI models (vision, text generation, TTS). This is mainly because math, tensor, and AI libraries are almost always written with a functional paradigm.
Business plan is simple: we charge per endpoint that downloads and executes the compiled binary. In the AI world, this removes a large multiplier in cost structure (paying per token). Beyond that, we help co's find, eval, deploy, and optimize models (more enterprise-y).
I understood some of it. Sounds reasonable if your market already is running a limited subset of the language, but I guess there is a lot of custom bullshit you actually wind up maintaining.
I was extremely skeptical when typing was introduced. What's the point if the runtime ignores them? I forced myself to use them as documentation, and now I'm on the other side of the spectrum: all code should have them. It has already helped me a lot during refactors, and I always wished more code had types. I think the same is true for AIs; they will benefit from having the types explicitly stated. It's a shame they currently default to untyped Python.
Ideally, with static checking, the runtime shouldn't need to care about types, because code that typechecks shouldn't be capable of not behaving according to the types declared.
Python, even with the most restrictive settings in most typecheckers, may not quite achieve that, but it certainly reduces the chance of surprises of that kind compared to typing information in docstrings, or just locked away in the unstated assumptions of some developer.
The thing is, when typing was being discussed, I was hoping it would lead to JavaScript-like evolution, where the dynamic nature of Python could be restricted if I use the right types, and a JIT compiler could optimize parts of the code, expecting u32 ints instead of PyObjects.
Type hints in Python add a great amount of visual noise to the code, and I actively avoid them wherever possible. If static typing is a must, use a language where static typing is not an afterthought, and let Python be Python.
Would you rather deal with a little visual noise or a runtime exception that you could've caught before code got to production? For me it's about tradeoffs, and so far the tradeoff has been well worth it.
Indeed, I almost can't read untyped python code these days. It just feels like "what the hell is going on here?" and "what is this object?" ever so often. Sorry to say that but most people who write python just aren't good API designers, or software engineers in general, and type hints can at least help others get a vague idea of what the intent was.
My view on typing in Python (a language I have used for decades) is that if I wanted types I would use a language designed from the ground up with strong and consistent typing built-in. Not bolt on a sort of type system which actively fights against the way I use the language on a day to day basis.
I use plenty of statically typed languages, Python's type hinting does not bring me joy.
Ok, so this is just one of many examples but the most immediate one is where I don't care about the immutable sanctity of the variable I have just declared.
I often use Python for data munging, and I'll frequently write a short chain of reassignments where the type of the value being assigned to foo is different each time. Now, obviously (in this simplistic example that misses subtleties) I could declare a new variable for each transformation step, or do some composite-type-building thing, or refactor this into separate functions for each step that requires a different type, but all of those options are unnecessary busy work for what should be a few simple lines of code.
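A minimal sketch of the kind of reassignment chain being described (the data is invented). Note that mypy by default rejects rebinding a name to an unrelated type unless `--allow-redefinition` is enabled:

```python
# The same name rebound to a different type at each munging step.
foo = "1, 2, 3"                # str
foo = foo.split(",")           # list[str]
foo = [int(x) for x in foo]    # list[int]
foo = sum(foo)                 # int
print(foo)  # 6
```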
Thanks, now I get why you feel like the type system is fighting your style of programming.
> all of those options are unnecessary busy work for what should be a few simple lines of code
If you re-type your variable often, then how do you make sure you’re really keeping track of all those types?
If you re-type it only a few times, then I’m not entirely convinced that declaring a few additional variables really constitutes busywork.
Small example with additional variables instead of re-typing the same variable:
    # pylint: disable=disallowed-name, missing-function-docstring, missing-module-docstring, redefined-outer-name
    from typing import NewType

    NEEDS_CHECKING = True

    NotCleaned = NewType("NotCleaned", str)
    Checked = NewType("Checked", str)
    Cleaned = NewType("Cleaned", str)

    original_foo = ["SOME ", "dirty ", " Data"]
    annotated_foo = [NotCleaned(item) for item in original_foo]
    cleaned_foo = [
        Cleaned(item.lower().strip().replace("dirty", "tidy"))
        for item in annotated_foo
    ]

    foo: list[Checked | Cleaned]
    if NEEDS_CHECKING:
        for idx, item in enumerate(cleaned_foo):
            if item and (item[0] == " " or item[-1] == " "):
                raise RuntimeError(f"Whitespace found in item #{idx}: {item=}")
            if "dirt" in item:
                raise RuntimeError(f"Item #{idx} is dirty: {item=}")
        foo = [Checked(item) for item in cleaned_foo]
    else:
        foo = list(cleaned_foo)

    print(foo)
    # => ['some', 'tidy', 'data']
This survives strict type checking (`mypy --strict`). I don’t feel that renaming the variables introduces much noise or busywork here? One might argue that renaming even adds clarity?
The biggest reason I use typehints is that VSCode's intellisense relies on them - and I know I've missed one when typing a dot doesn't give me the method I'm expecting.
I hate typing in Python. I spend a good chunk of my day fighting the type checker and adding meaningless assertions, casts, and new types all to satisfy what feels like an obsessive compulsive nitpicker. "Type partially unknown" haunts my dreams.
Duck typing is one of the best things about Python. It provides a developer experience second to none. Need to iterate over a collection of things? Great! Just do it! As long as it is an iterable (defined by methods, not by type) you can use it anywhere you want to. Want to create a data object that maps any hashable type to just about anything else? Dict has you covered! Put anything you want in there and don't worry about it.
If we ended up with a largely bug-free production system then it might be worth it, but, just as with other truly strongly typed languages, that doesn't happen, so I've sacrificed my developer experience for an unfulfilled promise.
If I wanted to use a strongly typed language I would, I don't, and the creeping infection of type enforcement into production codebases makes it hard to use the language I love professionally.
Couldn't agree more! I've been using Python for almost 20 years, my whole career is built on it, and I never missed typing.
Code with type hints is so verbose and unpythonic, making it much harder to read. Quite an annoying evolution.
As the article says, type hints represent a fundamental change in the way Python is written. Most developers seem to prefer this new approach (especially those who’d rather be writing Java, but are stuck using Python because of its libraries).
However it is indeed annoying for those of us who liked writing Python 2.x-style dynamically-typed executable pseudocode. The community is now actively opposed to writing that style of code.
I don’t know if there’s another language community that’s more accepting of Python 2.x-style code? Maybe Ruby, or Lua?
There is nothing python-2 about my python-3 dynamically typed code. I'm pretty confident a majority of new python code is still being written without type hints.
Hell, python type annotations were only introduced in python 3.5, when the language was 24 years old! So no, the way I write python is the way it was meant to be written. Type hints are a gadget that was bolted on when the language was already fully matured; it's pretty ridiculous to paint code without type hints as unpythonic, that's the world upside down.
If I wanted to write very verbose typed code I would switch to Go or Rust. My python stays nimble, clean and extremely readable, without type hints.
> Hell, python type annotations were only introduced in python 3.5
Mypy was introduced with support for both Python 2.x and 3.x (3.2 was current at the time), using type comments, before Python introduced a standard way of using Python 3.0's annotation syntax for typing. Even when type annotations were added to Python proper in PEP 484/Python 3.5, some uses now supported by them were still left to mypy-style type comments; type annotations for variables were only added in PEP 526/Python 3.6.
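For reference, the two eras of syntax described here look like this (a minimal sketch):

```python
from typing import List

# PEP 484 type comment: readable by mypy even on Python 2 and early 3.x,
# invisible to the interpreter itself.
counts = []  # type: List[int]

# PEP 526 variable annotation (Python 3.6+), the modern equivalent.
totals: List[int] = []

counts.append(1)
totals.append(2)
print(counts, totals)  # [1] [2]
```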
I agree completely! To be clear, I don’t consider describing code as “Python 2-style” to be a bad thing. It’s how I describe my own Python code!
Overall, I have found very few Python 3 features are worth adopting (one notable exception is f-strings). IMO most of them don’t pull their weight, and many are just badly designed.
> Duck typing is one of the best things about Python.
And duck typing with the expected contract made explicit and subject to static verification (and IDE hinting, etc.) is one of the best things about Python typing.
> If we ended up with a largely bug free production system then it might be worth it, but, just like other truly strongly typed languages, that doesn't happen
I find I end up at any given level of bug-freeness with less effort and time with Python-with-types than Python-without-types. (But I also like that typing being optional means it's very easy to toss out exploratory code before settling on how something new should work.)
Beyond the advantage that a type-checker/linter can tell if you're doing the right thing when writing those functions, it lets an IDE infer what type you're iterating over, in order to provide more support/completion/hinting/checks (without recursively analyzing arbitrary code, so 'instantly' rather than 'maybe not ever').
It's not quite all or nothing, but it's annoying to work with it if you only use it for some things and not for others. I find that if you have a mixture of TS and JS in various files I would rather just go all in on TypeScript so I don't have to manually annotate.
With Python you're still just working with Python files.
Type hints in python are just that, hints. Use them to help with clarity but enforcing them and requiring them everywhere generally leads to the worst of all worlds. Lots of boilerplate, less readable code and throwing away many of the features that make python powerful. Use the best language for the job and use the right language features at the right time. I see too many black or white arguments in the developer community, there is a middle ground and the best code is almost always written there.
> Lots of boilerplate, less readable code and throwing away many of the features that make python powerful.
IMO, that complaint almost always goes with overuse of concrete types when abstract types (Protocol/ABC) are more accurate to the function of the code.
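For example, annotating against the abstract collection types keeps the duck typing intact (`summarize` and its parameters are made-up names for illustration):

```python
from collections.abc import Mapping, Sequence

# Accept any sequence of names and any mapping of ages, not just list/dict.
def summarize(names: Sequence[str], ages: Mapping[str, int]) -> str:
    return ", ".join(f"{n} ({ages.get(n, '?')})" for n in names)

# A tuple and a plain dict both satisfy the abstract annotations:
print(summarize(("alice", "bob"), {"alice": 30}))  # alice (30), bob (?)
```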
There was a time when that was a limitation of Python typing, but it has now been untrue for almost as long as it was ever true.
Is it me or is everything slowly moving to strong types but don't want to commit?
For PHP it slowly got introduced in php5.4 and now it's expected to type hint everything and mark the file strict with linters complaining left and right that "you're doing a bad job by having mixed-type variables"
In Ruby you get Sorbet or RBS.
What is JavaScript? Oh, you mean TypeScript.
and so on ..
My take is that if you need strong types switch to a language with strong types so you can enjoy some "private static final Map<String, ImmutableList<AttemptType>> getAttemptsBatches(...)"
in general I see two types of Python in the wild:
- simple self-contained scripts, everything is a free function, no type hints
- over-engineered hierarchies of classes spread over dozens of files and modules, type hints everywhere
I personally largely prefer the first kind, but it seems even the standard formatting rules are against it (two empty lines between free functions etc.)
Type hints in dynamic languages are great, but I wish they came with deeper integration into the language runtime for validation and for optimizer setup.
If I have a function that takes an int, and I write down the requirement, why should a JIT have to learn independently of what I wrote down that the input is an int?
I get that it's this way because of how these languages evolved, but it doesn't have to stay this way.
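A toy sketch of what runtime enforcement derived from annotations could look like (`enforce_hints` is made up, and it only handles plain classes like `int`; real libraries such as beartype and typeguard do this far more completely):

```python
import inspect
from functools import wraps
from typing import get_type_hints

def enforce_hints(func):
    # Read the function's own annotations once, at decoration time.
    hints = get_type_hints(func)
    sig = inspect.signature(func)

    @wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # Only plain classes are checked in this toy version.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(f"{name} must be {expected.__name__}, "
                                f"got {type(value).__name__}")
        return func(*args, **kwargs)
    return wrapper

@enforce_hints
def double(x: int) -> int:
    return x * 2

print(double(21))        # 42
# double("21") raises TypeError at call time
```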
That was the original intent of mypy, to allow a subset of Python to be interpreted by a JIT or transpiled to a compiled, statically typed language.
The type hints proved to be useful on their own so the project moved past what was useful for that purpose, but a new JIT (such as the one the upcoming CPython 3.14 lays the groundwork for) could certainly use them.
For me, type hints are mainly useful because they're the only reliable way to get decent IDE auto-completion. Beyond that, they feel like a bolted-on compromise that goes against the spirit of Python. If you really need strict typing, you're probably better off using a statically typed language.
JSDoc plays a similar role with Javascript. Moreover it is supported out of the box by VSCode, so add a few JSDoc comments to your types and functions, and intellisense instantly kicks in.
I liked python in my early days because it felt simple and easy, and when I tried other languages having to deal with types felt so annoying... but then I grew and had to work with bigger codebases, and guess what: having types (and static type checking during compilation) helps A LOT... :)
Has anyone had good luck with auto-annotation of types in existing codebases? Either via LLM or via various runtime hooks? I work in a codebase that started off in Python 2 and isn't annotated with types in the majority of places, and I feel the pain every time I have to wonder what exactly the arguments to a method are.
Yep, I've used this pretty successfully, the ideal is to run it under realistic prod traffic over time to capture as many types as can flow into a given function, but a good set of unit/integration tests can also provide good coverage.
And if you re-use the same type store (SQLite DB) across multiple instrumented runs, you can further improve it.
I think I've always used Python type hints, but that was partially because early versions of Discord.py relied on them (maybe it still does). But it was also because I like to be able to mentally verify my code's correctness before running it, and waiting for runtime errors is comparatively a huge waste of time (in my opinion).
I'm building my own coding agent, like Claude, and it is built with opinionated style. Strongly typing Python and using beartype are what it will try to do unless the user specifies otherwise.
Misleading title. I was expecting to read about why devs actually embrace type hints. What the article is about is why you should and how you can use type hints. That's valuable, but different from what the title suggests.
Besides, the lack of static typing is a big part of the appeal for beginners. It's much harder to convince a non-CS beginner why they should bother with the extra burden of type hints. They are optional anyway and just slow folks down (or so they might think). Be careful about generally demanding that everybody use them.
But they probably help coding assistants to make fewer mistakes, so maybe that will soon be an argument if it isn't already. (That's an angle I expected in the article.)
The reason to use type hints is simple: it vastly improves scalability, making LLM agents much less error prone. Try it: tell Claude to type hint every function (e.g., in your Claude.md file) and see how much easier it is to scale your agents.
This also works for humans, but many python programmers who learned python before type hints can't be bothered. :sad_panda:
Boring old me would be reaching for mojo in order to have types that actually are "real" rather than just an editing overlay of decorators/DSL/tooling.
Too bad Mojo gave up on Python compatibility on code level. Now it’s just one of a dozen “Python inspired” languages, with the only benefit that they aim for easily calling into other Python code.
It's the other way around, IMO: people who think the language should at least have type hints are now more willing to use it, now that there's better tooling for checking those hints.
I’d love to see an analysis of how much typing hurts LLMs that need to read / edit your code (due to increased context) vs helps (due to more clear type context).
I want to believe that corrected typed python code is easier for smaller models to generate / interact with, but who knows how the trade-offs actually work out.
Types are invaluable in modern code bases because they end up saving a ton of tokens when agentic coding tools are trying to comprehend let alone modify the code. Python isn't intrinsically a great language for LLMs to work with, except in practice it is because they've had a great deal of training data for python. Type hints help a lot with this.
I like the type hints. They're not perfect and they've changed a lot between versions, but they really help catch issues early that you'd usually need to write unit tests for. Adding type hints is easier than writing those unit tests.
Then you can focus your tests on more interesting things
You just need to set your build up to actually do the checking as type hints by default are just documentation
Yeah, it would have been much better to have them enforced by default if present. Keeping them optional is fine, but I don't get the use case for "you can add them but not check them"... that just leads to actively misleading hints.
Use them for what they are (hints, documentation). Use it for gradual typing when implementation makes it hard to understand return or parameters types. But don't enforce it across your code base, use another language or another mindset instead.
How do type hints work if, for example, you import a library that hasn't implemented type hints into a project where you hope to have type hints? Do you just manually assign types to the outputs of this library?
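A common pattern, sketched with made-up names (`untyped_lookup` stands in for the library function):

```python
from typing import Optional, cast

# Stand-in for a function from an untyped third-party library:
def untyped_lookup(key):
    return {"a": 1, "b": 2}.get(key)

# Option 1: narrow the result yourself at the call site with cast().
# cast is a no-op at runtime; it only informs the type checker.
value = cast(Optional[int], untyped_lookup("a"))
print(value)  # 1

# Option 2: write a .pyi stub file for the library, or install a
# types-* stub package from typeshed if one exists.
```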
I like it but in my experience a lot of teams use them loosely without a type checker, more for understanding than for correctness. Reason for that is largely that it can be difficult to make the checkers happy…
This is why I think Julia will win in the long run. It has an amazing type system, simple yet powerful. In particular, abstract types are much easier to define and use than abstract classes in Python.
Ooh, I completely disagree. Julia has a worse type system overall, IMO.
The big downside of Julia is that Julia has no interfaces or protocols. So, you can't type assert that something is an iterable of integers, for example.
Another issue is that abstract types are completely undocumented and have no tooling support. You say it's easier to use an abstract type. Can you tell me what I need to define to create a working subtype of AbstractDict? Or Number? Or IO? It's completely undefined, and the only way to do it is to just define the type and then try it out and patch when it breaks because a method was missing.
Finally, there is no multiple inheritance. That means I can't define something which is both a subtype of AbstractArray and IO, for example.
There are no traits in Julia by default, that's true. But since types are first class citizens in Julia, traits can be implemented within the language. There is a package SimpleTraits.jl that implements Holy's trait trick, see also this tutorial https://ahsmart.com/pub/holy-traits-design-patterns-and-best...
An ability to work with types within the language is already a win for me.
I doubt that Meta (the company that sponsors the work on pyrefly) is looking forward to selling a product based on Python typing (assuming that's what's being glazed in the article).
I used python on a large code base for quite a while. Many team members did not like type hints, and a codebase that doesn't maintain type hints makes it harder to use them.
However, if I had a choice, rather than use typehints in python, I would much rather just use a statically typed language. Short, tiny scripts in python? Sure. Anything that grows or lives a long time? Use something where the compiler helps you out.
It's the developer performance benefit of catching type bugs early, not the application performance benefit from a compiler, that Python developers find compelling
I'm always surprised when people suggest using a different language if you want typing in Python. Python's (second?) largest appeal is probably its extensive ecosystem. Whenever people suggest just changing languages, I wonder if they work in isolation, without the need for certain packages or co-worker proficiency in that language.
I think people usually say it for a different reason.
Types are not enforced. You can annotate your code that looks correct to the type checker, but the actual data flow at runtime can be with different types.
And it happens quite often in large codebases. Sometimes external dependencies report wrong types, e.g., a tuple instead of a list. It's easy to make such a mistake when a library is written in a compiled language and just provides stubs for types. Tuples and lists share the same methods, so it will work fine for a lot of use cases. And since your type checker will force you to use a tuple instead of a list, you will never know that it's actually a list that can be modified unless you disable type checking and inspect the data.
To be pedantic compiled languages only check types at compile time as well. If you have a C library that takes void* then it can easily go wrong at runtime.
Typing has only been around since python 3.5. As someone who has formally learned 2.7 in university when 3.0 had already been around for a few years, I suppose there are many who still lag years behind what the language can do due to old codebases and fears of incompatibility.
Without significant language changes, this is not possible. While your code may be typed as an int, I can simply redefine what int means. I can also modify the code in your method.
I guess it would work with the ongoing jit work, which (as far as I understood) runs the code "as usual", then notices that a specific variable is always a dict (or whatever), and patches the code to run the dict-optimized version by default (falling back to the generic code if, somehow, the variable is no longer a dict).
With typing, the generic code could be avoided altogether. The algorithm would be:
- notice that some variable can be processed by a dict-optimized code (because its typing is a dict, or something that looks like a dict etc)
- when processing, check that the variable is indeed a "dict", raise an exception if not
- run the optimized code
- if the typing information changes (because the class has been redefined and the variable is no longer a "dict"), then go to step 1 and either stick with the current optimized code, use another one, or use the generic one
This would:
- enforce types (you said that variable is a Thing but a Thing was not given: exception)
- improve the jit by removing the bootstrap phase (where the jit watches and then try to guess what could be improved)
(or perhaps this is a stupid idea that cannot work :) )
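At the Python level, the guard-then-specialize step described above might be sketched like this (`process` is a made-up example; a real JIT would emit the guard and the fast path in machine code rather than as ordinary Python):

```python
# Verify the declared type once on entry, then run a path written
# against that concrete type.
def process(counts: dict) -> int:
    if not isinstance(counts, dict):  # the guard the hint would enable
        raise TypeError(f"expected dict, got {type(counts).__name__}")
    return sum(counts.values())  # dict-optimized path, no generic dispatch

print(process({"a": 1, "b": 2}))  # 3
```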
Type hints are much easier to use nowadays than they were a few years ago, because the agentic tools like Claude Code are very good at converting an existing codebase to using type hints.
The flip side of it is that Claude Code will have a very bad time in a code base with grossly unsatisfiable or conflicting types (where a type checker would fail the project). A human should always first ensure that the types are broadly correct, with or without the assistance of code tools.
I can empathize with the code tools. Sometimes I’ll read Python code and have no idea at first glance if these are type bugs or creative coding by the dev. Python is incredibly flexible. Though I think most of the time you really shouldn’t be using the flexibility.
Because they follow any corporate initiative that gives them something to do, even if the type hints are the most unreadable and hackish form of typing in existence.
I do not program that much in python, but I believe the general accepted wisdom in dynamic languages was explicit name and load of documentations (as comments and docstrings).
In Python, every variable is either defined or imported in the file in which it's used, so you always know where to find it. (Assuming you don't do `from foo import *`, which is frowned upon.)
In C++, a variable might be defined in a header or in a parent class somewhere else, and there's no indication of where it came from.
How does this help when trying to determine the parameters a function takes? You have to either hope that the name is descriptive enough or that the function is well-documented. Failing that, you need to read the code to find out.
Ruby has had static typing via RBS for a while now, and I don't know if it's because I'm primarily a Rails developer and DHH doesn't like static typing so using these with Rails feels third-class or maybe just that I'm really just all the way in "the ruby way" but it feels antithetical to a dynamically typed language to start shoehorning in static types. Even as a type definition in a separate file it just feels wrong.
Ruby in particular is already strongly typed, so there aren't too many surprises with automatic conversions or anything like that. RBS also just makes metaprogramming more annoying, and Ruby's ability to do metaprogramming easily is one of its biggest strengths in my opinion.
If I wanted a statically typed language I would just use a statically typed language.
I've been working with Python for years (since 2014) and typing makes the code less buggy and easier to maintain. I also would hardly call it "shoehorning", as years of design went into it.
Honestly the only people I see who really push back against it are the people who haven't bothered learning it. Once people use it for a bit, in my experience at least, they don't want to go back.
Maybe it's just because Python is kind of a lousy language to use in the first place. I started with Java and C++, did Python for a bit and switched to Ruby and never looked back. Being forced to use Python for anything feels like a punishment.
Years of design also went into Ruby's type system, and for the people that enjoy it - be my guest - but I would never use it for my own code.
With duck typing, you don’t need type hints. You’re just supposed to use the variable however you “feel” it should be used when you’re using it, and it automagically just works!!! Try it! /s
Python developers are embracing RuntimeError..
It's an amazing language that had its place; now it's time to sunset it and forget it. Leave it to prototyping only.
I actually don’t like python type hints!
At my work we have a jit compiler that requires type hints under some conditions.
Aside from that, I avoid them as much as possible. The reason is that they are not really a part of the language, they violate the spirit of the language, and in high-usage parts of code they quickly become a complete mess.
For example a common failure mode in my work’s codebase is that some function will take something that is indexable by ints. The type could be anything, it could be List, Tuple, Dict[int, Any], torch.Size, torch.Tensor, nn.Sequential, np.ndarray, or a huge host of custom types! And you better believe that every single admissible type will eventually be fed to this function. Sometimes people will try to keep up, annotating it with a Union of the (growing) list of admissible types, but eventually the list will become silly and the function will earn a # pyre-ignore annotation. This defeats the whole point of the pointless exercise.
So, if the jit compiler needs the annotation I am happy to provide it, but otherwise I will proactively not provide any, and I will sometimes even delete existing annotations when they are devolving into silliness.
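For what it's worth, the "indexable by ints" case can often be described with a single structural type instead of a growing Union; a hedged sketch (`IntIndexable` and `first` are illustrative names, not from the codebase):

```python
from typing import Any, Protocol

# One capability, not an enumeration of concrete types.
class IntIndexable(Protocol):
    def __getitem__(self, index: int, /) -> Any: ...

def first(items: IntIndexable) -> Any:
    return items[0]

# Lists, tuples, and int-keyed dicts all match structurally:
print(first([10, 20]), first((1, 2)), first({0: "zero"}))  # 10 1 zero
```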
That's the same complaints people had about TypeScript in the beginning, when libraries such as Express used to accept a wide range of input options that would be a pain to express in types properly. If you look at where the ecosystem is now, though, you'll see proper type stubs, and most libraries get written in TS in the first place anyway. When editing TS code, you get auto-completion out of the box, even for deeply nested properties or conditional types. You can rely on types being what the compiler says they are, and runtime errors are a rarity now (in properly maintained code bases).
> The reason is that they are not really a part of the language, they violate the spirit of the language, and in high-usage parts of code they quickly become a complete mess.
I'll admit that this is why I hate Python, and it's probably this spirit of the language, as you call it. I never really know what parameters a function takes. Library documentation often shows a few use cases, but doesn't really provide a reference; so I end up having to dig into the source code to figure it out on my own. Untyped and undocumented kwargs? Everywhere. I don't understand how someone could embrace so much flexibility that it becomes entirely undiscoverable for anyone but maintainers.
Because the flexibility has been a boon and not a problem. The problem only comes when you try to express everything in a type system that is third-party (the type checkers for it) and added on top.
It's a boon if the goal is to write code then go home. It's a loaded footgun if the goal is to compose a stack and run it in production within SLO.
Python type hints manage to largely preserve the flexibility while seriously increasing confidence in the correctness, and lack of crashing corner cases, of each component. There's really no good case against them at this point outside of one-off scripts. (And even there, I'd consider it good practice.)
As a side bonus, lack of familiarity with Python type hints is a clear no-hire signal, which saves a lot of time.
I think with types there is a risk of typing things too early or too strictly, or of types nudging one in a direction that reduces the applicability and flexibility of the final outcome. Some things can be difficult to express in types, so people choose easier-to-type solutions that are less flexible and introduce more work later, when things need to change, due to that inflexibility or limited applicability.
People say this all the time, but I've never seen any data proving it's true. It should be rather easy to gather, too: I'm at a big company and different teams use different languages. The strictly typed languages don't have fewer defects, and those teams don't ship features any faster than the teams using loosely typed languages.
What I've experienced is that other factors make the biggest difference. Teams that write good tests, have good testing environments, good code review processes, good automation, etc tend to have fewer defects and higher velocity. Choice of programming language makes little to no difference.
>It's a boon if the goal is to write code then go home. It's a loaded footgun if the goal is to compose a stack and run it in production within SLO.
Never has been an issue in practice...
Did you forget /s at the end of this?
I work at big tech and the number of bad deploys and reverts I've seen go out due to getting types wrong is in the hundreds. Increased type safety would catch 99% of the reverts I've seen.
Also have fun depending on libraries 10 years old as no one likes upgrades over fear of renames.
Ops type here, I’ve got multiple stories where devs have screwed up with typing and it’s caused downstream problems.
> Because the flexibility has been a boon and not a problem
Well, you could say that the problem in this case was the lack of documentation, if you wanted. The type signature could be part of the documentation, from this point of view.
Let me give a kind-of-concrete example: one year I was working through a fast.ai course. They have a Python layer above the raw ML stuff. At the time, the library documentation was mediocre: the code worked, there were examples, and the course explained what was covered in the course. There were no type hints. It's free (gratis), I'm not complaining. However, once I tried making my own things, I constantly ran into questions about "can this function do X" and it was really hard to figure out whether my earlier code was wrong or whether the function was never intended to work with the X situation. In my case, type hints would have cleared up most of the problems.
> the lack of documentation
If the code base expects flexibility, trusting documentation is the last thing you'd want to do. I know some people live and die by the documentation, but that's just a bad idea when duck typing or composition is heavily used for instance, and documentation should be very minimal in the first place.
When a function takes a myriad of potential input, "can this function do X" is an answer you get by reading the function or the tests, not the prose on how it was intended 10 years ago or how some other random dev thinks it works.
Documentation doesn’t have to be an essay. A simple, automatically generated reference with proper types goes a long way to tell me „it can do that“ as opposed to „maybe it works lol“. That’s not the level of engineering quality I’m going for in my work.
This whole discussion is about how you might not want to list every single type a function accepts. I also kind of wonder how you automatically generate that for duck typing.
Generally using the Protocol[1] feature
This of course works with dunder methods and such. You can also annotate with @runtime_checkable (also from typing) to make `isinstance`, etc. work with it.

[1]: https://typing.python.org/en/latest/spec/protocol.html
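A minimal sketch of the @runtime_checkable behavior mentioned above (`Quacks`, `Duck`, and `Brick` are made-up names):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Quacks(Protocol):
    def quack(self) -> str: ...

class Duck:
    def quack(self) -> str:
        return "quack"

class Brick:
    pass

# isinstance checks the structure (a quack method), not the class hierarchy:
print(isinstance(Duck(), Quacks), isinstance(Brick(), Quacks))  # True False
```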
You're then creating a Protocol for every single function that could rely on some duck typing.
Imagine one of your functions just wants to move an iterator forward, and another just wants the current position. You're stuck either requiring the full iterator interface when only part of it is needed, or creating one protocol for each function.
In day to day life that's dev time that doesn't come back as people are now spending time reading the protocol spaghetti instead of reading the function code.
I don't deny the usefulness of typing and interfaces in stuff like libraries and heavily used common components. But that's not most of your code in general.
For the collections case in particular, you can use the ABCs for collections that already exist[1]. There's probably one that satisfies your use case. There are also similar things for the numeric tower[2]. SupportsGE/SupportsGT/etc. should probably be in the stdlib, but you can import them from typeshed.
In the abstract sense though, most code in general can't work with anything that quack()s, or it would be incorrect to. The flip method on a penguin's flipper in a hypothetical animallib would probably have different implications than the flip method in a hypothetical lightswitchlib.
Or less by analogy, adding two numbers is semantically different than adding two tuples/str/bytes or what have you. It makes sense to consider the domain modeling of the inputs rather than just the absolute minimum viable to make it past the runtime method checks.
But failing that, there's always just Any if you legitimately want to allow any input (but this is costly as it effectively disables type checking for that variable) and is potentially an indication of some other issue.
[1]: https://docs.python.org/3.14/library/collections.abc.html
[2]: https://docs.python.org/3/library/numbers.html
> You're then creating a Protocol for every single function that could rely on some duck typing.
No, you are creating a Protocol (the kind of Python type) for every protocol (the descriptive thing the type represents) that is relied on for which an appropriate Protocol doesn’t already exist. Most protocols are used in more than one place, and many common ones are predefined in the typing module in the standard library.
Except Typescript embraces duck typing. You can say "accept any object with a quack() method", for example, and it'll accept an unexpected quacking parrot. It can even tell when two type definitions are close enough and merge them.
So does Python. They're called protocols. [0]
[0]: https://typing.python.org/en/latest/spec/protocol.html
> Except Typescript embraces duck typing.
So does Python:
https://typing.python.org/en/latest/spec/protocol.html
It's not duck typing if you have to declare the type...
Kind of, depends on the compiler configuration.
Doesn't Go also use structural typing?
I like Python a lot, and have been using it for personal projects since about 2010. It was only once I started working and encountering long-lived unfamiliar Python codebases regularly that I understood the benefits of type hints. It's not fun to have to trace through 5 or 6 different functions to try to figure out what type is being passed in or returned from something. It's even less fun to find out that someone made a mistake and it's actually two different incompatible things depending on the execution path.
That era of Python codebases was miserable to work in, and those codebases often ended up in the poorly thought out "we don't know how this works and it has too many bugs, let's just rewrite it" category.
> It's not fun to have to trace through 5 or 6 different functions to try to figure out what type is being passed in or returned from something.
My position is that what is intended must be made clear between type hints and the docstring. Skipping this makes for difficult to read code and has no place in a professional setting in any non-trivial codebase.
This doesn't require type hints to achieve. :param and :rtype in the docstring are fine if type hints aren't present, or for complex cases, plain English in the docstring is usually better.
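For reference, the Sphinx-style docstring fields in question look like this (`area` is a made-up example):

```python
def area(width, height):
    """Compute the area of a rectangle.

    :param width: the rectangle's width
    :type width: float
    :param height: the rectangle's height
    :type height: float
    :rtype: float
    """
    return width * height

print(area(3.0, 4.0))  # 12.0
```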
:param and :rtype are type hints, just type hints that cannot be validated by tooling and are guaranteed to go out of sync with the code eventually.
Proper type hints are typically very easy to add if the codebase is not a mess that passes things around far and wide with no validation. If it is, the problem is not with the type hints.
I agree, although I've found that correct and comprehensive use of the docstring for this purpose has not existed in the environments I've worked in, or in the open source codebases I have needed to understand. Something about type hinting makes people more likely to do it.
I am sorry, but what's wrong with doing something like `print(type(var)); exit()` and just running it once, instead of digging through 5-6 stack frames?
Sometimes a function's input or return type can vary depending on the execution path? Also, inserting print statements is often not practical when working on web backend software which is kind of a big thing nowadays. If you can run the service locally, which is not a given, dependencies get mocked out and there's no guarantee that your code path will execute or that the data flowing through it will be representative.
They don’t violate the spirit of the language. They are optional. They don’t change the behaviour at runtime.
Type annotations can indeed seem pointless if you are unwilling to learn how to use them properly. Using a giant union to type your (generic) function is indeed silly; you just have to make that function generic, as explained in another comment, or I guess remove the type hints.
> They don’t violate the spirit of the language. They are optional.
That in itself violates the spirit of the language, IMO. “There should be one obvious way to do it”.
Well, precisely:
- There is one obvious way to provide type hints for your code, it’s to use the typing module provided by the language which also provides syntax support for it.
- You don’t have to use it because not all code has to be typed
- You can use formatted strings, but you don’t have to
- You can use comprehensions but you don’t have to
- You can use async io, but you don’t have to. But it’s the one obvious way to do it in python
The obvious way to annotate a generic function isn’t with a giant Union, it’s with duck typing using a Protocol + TypeVar. Once you know that, the obvious way is… pretty obvious.
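A sketch of that Protocol + TypeVar approach for the int-indexable case (names made up for illustration):

```python
from typing import Protocol, TypeVar

T_co = TypeVar("T_co", covariant=True)
T = TypeVar("T")

class IntIndexable(Protocol[T_co]):
    def __getitem__(self, index: int) -> T_co: ...

def third(xs: IntIndexable[T]) -> T:
    return xs[3]

# Lists, tuples, and int-keyed dicts all match structurally; none of them
# ever declares that it implements IntIndexable.
print(third([10, 20, 30, 40]))          # 40
print(third({3: "three", 7: "seven"}))  # three
```

No growing Union needed: any current or future type with an int-accepting `__getitem__` satisfies the protocol automatically.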
The obvious way to not be bothered with type hints, if you don’t like them, is not to use them!
Python is full of optional stuff: dataclasses, named tuples, metaprogramming, multiple-ancestor inheritance. You don't have to use these features, but there is only one way to use them.
> but there is only one way to use them
The optional nature of those features conflicts with this statement, as optionality already means two ways.
classes are optional in python, does that violate the spirit?
"There should only be one way to do it" has not really been a thing in Python for at least the last decade or longer. It was originally meant as a counterpoint to Perl's "there's more than one way to do it," to show that the Python developers put a priority on quality and depth of features rather than quantity.
But times change and these days, Python is a much larger language with a bigger community, and there is a lot more cross-pollination between languages as basic philosophical differences between the most popular languages steadily erode until they all do pretty much the same things, just with different syntax.
> "There should only be one way to do it" has not really been a thing in Python for at least the last decade or longer.
It never was a thing in Python, it is a misquote of the Zen of Python that apparently became popular as a reaction against the TMTOWTDI motto of the Perl community.
Not misquoted, paraphrased. I didn't feel like bothering to check the output of "import this" before posting.
the whole language violates this principle tbh, so it's very in spirit
Yeah that ship sailed some time before they added a third way to do templated string interpolation.
How so? There is one way to do it. If you want typing, you use type hints. You wouldn't say that, say, functions are unpythonic because you can either use functions or not use them, therefore there's two ways to do things, would you?
And Python failed at that decades ago. People push terribly complicated, unreadable code under the guise of Pythonic. I disagree with using Pythonic as reasoning for anything.
This is a popular misquote from the Zen of Python. The actual quote is “There should be one—and preferably only one—obvious way to do it.”
The misquote shifts the emphasis to uniqueness rather than having an obvious way to accomplish goals, and is probably a result of people disliking the “There is more than one way to do it” adage of Perl (and embraced by the Ruby community) looking to the Zen to find a banner for their opposing camp.
on that note, which is better, using `map()` or a generator expression?
Actually in Python it can. Since the type hints are accessible at runtime, library authors can for example change which values in kwargs are allowed based on the type of the argument.
So on the language level it doesn’t directly change the behavior, but it is possible to use the types to affect the way code works, which is unintuitive. I think it was a bad decision to allow this, and Python should have opted for a TypeScript style approach.
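To make the disagreement concrete: because annotations are visible at runtime, a library can change its behavior based on them. A minimal sketch of the pattern (hypothetical helper, not any particular library's API):

```python
from typing import get_type_hints

def handler(user_id: int, name: str) -> str:
    return f"{user_id}:{name}"

def dispatch(func, raw_kwargs: dict[str, str]):
    # Read the declared parameter types at runtime and coerce the raw
    # string inputs accordingly before calling, FastAPI/dataclass-style.
    hints = get_type_hints(func)
    converted = {k: hints[k](v) for k, v in raw_kwargs.items()}
    return func(**converted)

print(dispatch(handler, {"user_id": "42", "name": "alice"}))  # 42:alice
```

This is exactly the behavior-from-annotations coupling being debated: impossible in TypeScript after erasure, routine in Python.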
"You can make it change the behaviour at runtime" is different from "it changes the behaviour at runtime", I think?
Lots of very useful tooling such as dataclasses and frameworks like FastAPI rely on this, and your opinion is that it's a bad thing why?
In TypeScript, the absence of type annotation reflection at runtime makes it harder to implement things that people obviously want, for example interop between TypeScript and Zod schemas. Zod instead has to hook into the TS compiler to do these things.
I'm honestly not convinced TypeScript is better in that particular area. What Python has opted for is to add first-class support for type annotations in the language (which JavaScript might end up doing as well, there are proposals for this, but without the metadata at runtime). Having this metadata at runtime makes it possible to implement things like validation at runtime, rather than having to write your types in two systems, with or without codegen (if Python had to resort to codegen to do this, as is necessary in TypeScript, I would personally find that less pythonic).
I think on the contrary it allows for building intuitive abstractions where typescript makes them harder to build?
Yeah, but then you get into the issues of when and where generic types are bound and narrowed, which can make it more complicated. At that point one might be better off stepping back, redesigning, or letting go of perfect type hint coverage for dynamic constructs that one couldn't even write in another type-safe language.
I don’t know anything about your jit compiler, but generally the value I get from type annotations has nothing to do with what they do at runtime. People get so confused about Python’s type annotations because they resemble type declarations in languages like C++ or Java. For the latter, types tell the compiler how to look up fields on, and methods that apply to, an object. Python is fine without that.
Python’s types are machine-checkable constraints on the behavior of your code. Failing the type checker isn’t fatal, it just means you couldn’t express what you were doing in terms it could understand. Although this might mean you need to reconsider your decisions, it could just as well mean you’re doing something perfectly legitimate and the type checker doesn’t understand it. Poke a hole in the type checker using Any and go on with your day. To your example, there are several ways described in comments by me and others to write a succinct annotation, and this will catch cases where somebody tries to use a dict keyed with strings or something.
Anyway, you don’t have to burn a lot of mental energy on them, they cost next to nothing at runtime, they help document your function signatures, and they help flag inconsistent assumptions in your codebase even if they’re not airtight. What’s not to like?
So the type is anything that implements the index function ([], or __getitem__). I think that's a Sequence, similar to Iterable.
    from typing import Sequence

    def third(something: Sequence):
        return something[3]
however if all you are doing is just iterating over the thing, what you actually need is an Iterable

    from typing import Iterable

    def average(something: Iterable):
        for thing in something:
            ...
Statistically, the odds of a language being wrong are much lower than the odds of the programmer being wrong. Not to say that there aren't valid critiques of Python, but we must think of the creators of programming languages and their creations as the top of the field. If a 1400-Elo chess player criticizes Magnus Carlsen's chess theory, it's more likely that the player is missing some theory than that he has found a hole in Carlsen's game; the player is better served by approaching a problem with the mentality that he, rather than the master, is the problem.
> So the type is anything that implements the index function ([], or __getitem__), I thnink that's a Sequence
Sequence involves more than just __getitem__ with an int index, so if it really is anything int-indexable, a lighter protocol with just that method will be more accurate, both at conveying intent and at avoiding the need to evolve into an odd union type because you have something that satisfies the function’s needs but not the originally-defined type.
> we must think of the creators of programming languages and their creations as the top of the field
The people at the top of the type-system-design field aren’t working on Python.
That is sort of ironic because the Pythonistas did not leave out any opportunity to criticize Java. Java was developed by world class experts like Gosling and attracted other type experts like Philip Wadler.
No world class expert is going to contribute to Python after 2020 anyway, since the slanderous and libelous behavior of the Steering Council and the selective curation of allowed information on PSF infrastructure makes the professional and reputational risk too high. Apart from the fact that Python is not an interesting language for language experts.
Google and Microsoft have already shut down several failed projects.
>"Guido: Java is a decent language," 1999
I get the idea that Python and Java went in opposite directions. But I'm not aware of any fight between both languages. I don't think that's a thing either.
Regarding stuff that happened in the 2020s: Python was developed in the 90s; Python 3 was launched in 2008. Besides some notable PEPs like type hints and WSGI, the rest of the development is footnotes. The same goes for most languages (with perhaps the exception of the ever-growing C++); languages make strong backwards-compatibility guarantees, and so the bulk of their innovation comes from the early years.
Whatever occurs in the 20th and 30th year of development is unlikely to be revolutionary or very significant. Especially ignorable is the drama that might emerge in these discussions: slander, libel, inter-language criticism?
Just mute that out. I've read some news about communities like Ruby on Rails or Nix that became overtaken by people and discussions of a political nature rather than development; they can just be ignored, I think.
> Google and Microsoft have already shut down several failed projects
Could you elaborate on this?
Sure: Google fired the Python language team in 2024 that contained a couple of the worst politicians who were later involved in slandering Tim Peters.
Before that, Google moved heavily from Python to Go.
Microsoft fired the "Faster CPython Team" this year.
It’s unlikely those layoffs are related to that, but rather to the industry at large and the end of ZIRP. Those types of folks are common in big tech companies as well.
For example the dart/flutter team was decimated as well.
>you better believe that every single admissible type will eventually be fed to this function
That's your problem right there. Why are random callers sending whatever different input types to that function?
That said, there are a few existing ways to define that property as a type, why not a protocol type "Indexable"?
>why not a protocol type
it was a sin that python's type system was initially released as a nominal type system. protocols should have been the target from day one.
being unable to just say "this takes anything that you can call .hello() and .world() on" was ridiculous, as that was part of the ethos of the dynamically typed python ecosystem. typechecking was generally frowned upon, with the idea that you should accept anything that fit the shape the receiving code required. it allowed you to trivially create resource wrappers and change behaviors by providing alternate objects to existing mechanisms. if you wanted to provide a fake file that read from memory instead of an actual file, it was simple and correct.
the lack of protocols made hell of these patterns for years.
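The fake-file pattern mentioned above can now be typed structurally. A minimal sketch using io.StringIO as the in-memory stand-in:

```python
import io
from typing import Iterator, Protocol

class LineSource(Protocol):
    # Anything you can iterate lines over: a real file, or a fake.
    def __iter__(self) -> Iterator[str]: ...

def count_lines(f: LineSource) -> int:
    # Works identically for an open file object and an in-memory buffer.
    return sum(1 for _ in f)

print(count_lines(io.StringIO("a\nb\nc\n")))  # 3
```

Neither io.StringIO nor a real file object declares LineSource; the type checker accepts both because the shape matches.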
I disagree. I think, if the decision was made today, it probably would have ended up being structural, but the fact that it isn't enables (but doesn't necessarily force) Python to be more correct than if it weren't (whereas forced structural typing has a certain ceiling of correctness).
Really, it enabled the Python type system to work as well as it does, as opposed to TypeScript, where soundness is completely thrown out except for some things such as enums.
Nominal typing enables you to write `def ft_to_m(x: Feet) -> Meters:` and be relatively confident that you're going to get Feet as input and Meters as output (and if not, the caller who ignored your type annotations is okay with the broken pieces).
In practice I've found the use for protocols in Python to be limited (their biggest usefulness comes from the iterable types), mostly for code that's in a transitional period, or for better type annotations on callables (for example kwargs, etc.).
TypeScript sacrificed soundness to make it easier to gradually type old JS code and to allow specific common patterns. There is no ceiling for correctness of structural typing bar naming conflicts.
>The use for protocols in Python in general I've found in practice to be limited (the biggest usefulness of them come from the iterable types)
Most of Python's dunder methods make it so you can make "behave-alike" objects for all kinds of behaviors, not just iterables.
AFAIK, Python is missing fully-featured, up-to-date, centralized documentation on how to use type annotations.
The current docs are "Microsoft-like", they have everything, spread through different pages, in different hierarchies, some of them wrong, and with nothing telling you what else exists.
> That's your problem right there. Why are random callers sending whatever different input types to that function?
Because it’s nice to reuse code. I’m not sure why anyone would think this is a design issue, especially in a language like Python where structural subtyping (duck typing) is the norm. If I wanted inheritance soup, I’d write Java.
Ironically, that support for structural subtyping is why Protocols exist. It’s too bad they aren’t better and aren't the primary way to type Python code. It’s also too bad that TypedDict actively fought duck typing for years.
Why can’t you re-use it with limited types? If the types are too numerous/hard to maintain it seems like the same would apply to the runtime code.
Because it’s nice to reuse code. It’s virtually never the case that a function being compatible with too many types is an issue. The issue is sometimes that it isn’t clear what types will be compatible with a function, and people make mistakes.
Python’s type system is overall pretty weak, but with any static language at least one of the issues is that the type system can’t express all useful and safe constructs. This leads to poor code reuse and lots of boilerplate.
>It’s virtually never the case that a function being compatible with too many types is an issue
This kind of accidental compatibility is a source of many hard bugs. Things appear to work perfectly, then at some point it does something subtly different, until it blows up a month later
if the piece of code in question is so type independent, then either it should be generic or it's doing too much
Yes. It's not the type system that's broken, it's the design. Fix the design, and the type system works for you, not against you.
> Why are random callers sending whatever different input types to that function?
Probably because the actual type it takes is well-understood (and maybe even documented in informal terms) by the people making and using it, but they just don’t understand how to express it in the Python type system.
> For example a common failure mode in my work’s codebase is that some function will take something that is indexable by ints. The type could be anything, it could be List, Tuple, Dict[int, Any], torch.Size, torch.Tensor, nn.Sequential, np.ndarray, or a huge host of custom types! And you better believe that every single admissible type will eventually be fed to this function. Sometimes people will try to keep up, annotating it with a Union of the (growing) list of admissible types, but eventually the list will become silly and the function will earn a # pyre-ignore annotation. This defeats the whole point of the pointless exercise.
You are looking for protocols. A bit futzy to write once, but for a heavily trafficked function it's worth it.
If your JIT compiler doesn't work well with protocols... sounds like a JIT problem not a Python typing problem
In my experience, the right tooling makes Python typing a big win. Modern IDEs give comprehensive real-time feedback on type errors, which is a big productivity boost and helps catch subtle bugs early (still nowhere near Rust, but valuable nonetheless). Push it too far though, and you end up with monsters like Callable[[Callable[P, Awaitable[T]]], TaskFunction[P, T]]. The art is knowing when to sprinkle types just enough to add clarity without clutter.
When you hit types like that type aliases come to the rescue; a type alias combined with a good docstring where the alias is used goes a long way
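For example, the nested Callable monster above can be tamed with aliases (hypothetical names; the TaskFunction wrapper is simplified to an identity-style decorator here):

```python
from typing import Awaitable, Callable, TypeVar

T = TypeVar("T")

# One readable alias per concept instead of one giant nested signature.
AsyncFunc = Callable[..., Awaitable[T]]
TaskDecorator = Callable[[AsyncFunc[T]], AsyncFunc[T]]

def register(func: AsyncFunc[T]) -> AsyncFunc[T]:
    # Reads far better than Callable[[Callable[P, Awaitable[T]]], ...].
    return func

async def job() -> int:
    return 1

print(register(job) is job)  # True
```

Each alias then gets documented once, at its definition, rather than re-deciphered at every use site.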
On the far end of this debate you end up with types like _RelationshipJoinConditionArgument which I'd argue is almost more useless than no typing at all. Some people claim it makes their IDE work better, but I don't use an IDE and I don't like the idea of doing extra work to make the tool happy. The opposite should be true.
You can use a Protocol type for that, makes a lot mote sense than nominal typing for typing use case.
Exactly, sounds like misuse of unions.
Although Python type hints are not expressive enough.
No idea about Python type system, but doesn't it have anything like this?
It does!
You can specify a protocol like this:
(Edit: formatting)The syntax is definitely harder to grasp but if the mechanism is there, I guess the parent poster's concern can be solved like that.
Although I understand that it might have been just a simplified example. Usually the "Real World" can get very complex.
> The syntax is definitely harder to grasp
Yes it is. I believe the reason is that this is all valid Python, while TypeScript is not valid JavaScript. Also, Python's type annotations are available at runtime (e.g. for introspection) while TypeScript's types aren't.
That said, typescript static type system is clearly both more ergonomic and more powerful than Python's.
I feel pretty similarly on this. Python’s bolted-on type system is very poor at encoding safe invariants common in the language. It’s a straitjacketed, Java-style OOP type system that’s a poor fit for many common Python patterns.
I would love it if it were better designed. It’s a real downer that you can’t check lots of Pythonic, concise code using it.
It sounds like that function is rightfully eligible to be ignored or to use the Any designation. To me that's why the system is handy. For functions that have specific inputs and outputs, it helps developers keep things straight and document code.
For broad things, write Any or skip it.
    from typing import Protocol, TypeVar

    T_co = TypeVar("T_co", covariant=True)

    class Indexable(Protocol[T_co]):
        def __getitem__(self, i: int) -> T_co: ...

    def f(x: Indexable[str]) -> None:
        print(x[0])

I am failing to format it properly here, but you get the idea.
Just fyi: https://news.ycombinator.com/formatdoc
> Text after a blank line that is indented by two or more spaces is reproduced verbatim. (This is intended for code.)
If you'd want monospace you should indent the snippet with two or more spaces:
I give Rust a lot of points for putting control over covariance into the language without making anyone remember which one is covariance and which one is contravariance.
One of the things that makes typing an existing codebase difficult in Python is dealing with variance issues. It turns out people get these wrong all over the place in Python and their code ends up working by accident.
Generally it’s not worth trying to fix this stuff. The type signature is hell to write and ends up being super complex if you get it to work at all. Write a cast or Any, document why it’s probably ok in a comment, and move on with your life. Pick your battles.
Kotlin uses "in" and "out": https://kotlinlang.org/docs/generics.html
Co- means with. Contra- means against. There are lots of words with these prefixes you could use to remember (cooperate, contradict, etc.).
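In Python terms, a small sketch of that mnemonic (illustrative class names):

```python
from typing import Generic, TypeVar

# co- = "with": a covariant parameter varies WITH the subtype relation,
# so it may only appear in output (producer) positions.
# contra- = "against": a contravariant parameter varies AGAINST it,
# so it may only appear in input (consumer) positions.
T_co = TypeVar("T_co", covariant=True)
T_contra = TypeVar("T_contra", contravariant=True)

class Producer(Generic[T_co]):
    def get(self) -> T_co: ...

class Consumer(Generic[T_contra]):
    def put(self, item: T_contra) -> None: ...

print(T_co.__covariant__, T_contra.__contravariant__)  # True True
```

So a Producer[Cat] is usable where a Producer[Animal] is expected, while a Consumer[Animal] is usable where a Consumer[Cat] is expected.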
There is also a bunch of prepackaged types, such as collections.abc.Sequence, that could be used in this case.
Sequence does not cut it, since the OP mentioned int-indexed dictionaries. But yeah.
> Sequence[SupportsFloat] | Mapping[int,SupportsFloat]
This is really just the same mistake as the original expanding union, but with overly narrow abstract types instead of overly narrow concrete types. If it relies on “we can use indexing with an int and get out something whose type we don’t care about”, then it's a Protocol with the following method:

    def __getitem__(self, index: int) -> Any: ...

More generally, even if there is a specific output type when indexing, or the output type of indexing can vary but in a way that impacts the output or other input types of the function, it is a protocol with a type parameter T and this method:

    def __getitem__(self, index: int) -> T: ...

It doesn’t need to be a union of all possible concrete and/or abstract types that happen to satisfy that protocol, because it can be expressed succinctly and accurately in a single Protocol. As of Python 3.12, you don’t need separately declared TypeVars with explicit variance specifications; you can use the improved generic type parameter syntax and variance inference.
So, just:

    class Indexable[T](Protocol):
        def __getitem__(self, index: int) -> T: ...

is enough.
> eventually the list will become silly and the function will earn a # pyre-ignore annotation. This defeats the whole point of the pointless exercise.
No, this is the great thing about gradual typing! You can use it to catch errors and provide IDE assistance in the 90% of cases where things have well-defined types, and then turn it off in the remaining 10% where it gets in the way.
Define a protocol[0] that declares it implements `__getitem__` and type annotate with that protocol. Whatever properties are needed inside the function can be described in other protocols.
These are similar to interfaces in C# or traits in Rust - you describe what the parameter _does_ instead of what it _is_.
[0]: https://typing.python.org/en/latest/spec/protocol.html
>The type could be anything, it could be List, Tuple, Dict[int, Any], torch.Size, torch.Tensor, nn.Sequential, np.ndarray, or a huge host of custom types!
That's not how you are supposed to use static typing? Python has "protocols", which allow for structural type checking and are intended for this exact problem.
Sounds like the ecosystem needs an "indexable" type annotation. Make it an "indexable<int>" for good measure.
Right, this was my thought.
Can’t you just use a typing.Protocol on __getitem__ here?
https://typing.python.org/en/latest/spec/protocol.html
Something like

    class Indexable(Protocol):
        def __getitem__(self, key: int) -> Any: ...

Though maybe numpy slicing needs a bit more work to support.
Indeed.
IMO, the trick to really enjoying python typing is to understand it on its own terms and really get comfortable with generics and protocols.
That being said, especially for library developers, the not-yet-existent intersection type [1] can prove particularly frustrating. For example, a very frequent pattern for me is writing a decorator that adds an attribute to a function or class, and then returns the original function or class. This is impossible to type hint correctly, and as a result, anywhere I need to access the attribute I end up writing a separate "intersectable" class and writing either a typeguard or calling cast to temporarily transform the decorated object to the intersectable type.
Also, the second you start to try and implement a library that uses runtime types, you've come to the part of the map where someone should have written HERE BE DRAGONS in big scary letters. So there's that too.
So it's not without its rough edges, and protocols and overloads can be a bit verbose, but by and large once you really learn it and get used to it, I personally find that even just the value of the annotations as documentation is useful enough to justify the added work adding them.
[1] https://github.com/python/typing/issues/213
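A minimal sketch of the decorator pattern described above (hypothetical names), showing where the missing intersection type bites:

```python
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])

def tag(name: str) -> Callable[[F], F]:
    # Attach an attribute, return the original function. The F -> F
    # signature cannot express "F plus a tag_name attribute", which is
    # exactly what an intersection type would provide.
    def deco(func: F) -> F:
        func.tag_name = name  # type checkers complain here; runtime is fine
        return func
    return deco

@tag("greeter")
def hello() -> str:
    return "hi"

# Reading the attribute back needs getattr/cast/ignore to satisfy checkers.
print(hello(), getattr(hello, "tag_name"))  # hi greeter
```

Without Intersection[F, HasTagName], the attribute is invisible to the type checker even though the runtime behavior is entirely well-defined.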
Slicing is totally hintable as well.
Change the declaration to:
def __getitem__(self, i: int | slice)
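More precisely, int indexing and slicing usually return different types, which @overload can express where a plain union cannot (a sketch with made-up names):

```python
from typing import Any, Protocol, overload

class Sliceable(Protocol):
    # An int index returns one element; a slice returns another Sliceable.
    @overload
    def __getitem__(self, i: int) -> Any: ...
    @overload
    def __getitem__(self, i: slice) -> "Sliceable": ...

def head(xs: Sliceable) -> Any:
    return xs[0:2]

print(head([1, 2, 3]))  # [1, 2]
```

Lists, tuples, and numpy-style arrays all match this shape structurally without being listed anywhere.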
Though to be honest I am more concerned about that function that accepts a wild variety of objects that seem to be from different domains...
I'd guess inside the function is a HUGE ladder of 'if isinstance()' to handle the various types and special processing needed. Which is totally reeking of code smell.
You explained some hyper niche instance where type hints should be ignored. 99% of the time, they are extremely helpful.
It's not even a niche instance, protocols solve their problem lol.
the issue of having multiple inputs able to be indexable by ints is exactly why i prefer that type hints remain exactly as "hints" and not as mandated checks. my philosophy for type hints is that they are meant to make codebases easier to understand without getting into a debugger. their functional equivalence should be that of comments. it's a cleaner, more concise way of describing a variable instead of using a full-on docstring.
though maybe there's a path forward to give a variable a sort of "de-hint", in that it can be everything BUT this type (i.e. an argument can be any indexable type, except a string)
>though maybe there's a path forward to give a variable a sort of "de-hint" in that it can be everything BUT this type
I think this is called a negation type, and it acts like a logical NOT operator. I'd like it too, and I hear that it works well with union types (logical OR) and intersection types (logical AND) for specifying types precisely in a readable way.
Can't you define your own hint for "type that has __getitem__ taking int"?
The way I understand parent is that such a type would be too broad.
The bigger problem is that the type system expressed through hints in Python is not the type system Python is actually using. It's not even an approximation. You can express in the hint type system things that are nonsense in Python and write Python that is nonsense in the type system implied by hints.
The type system introduced through typing package and the hints is a tribute to the stupid fashion. But, also, there is no syntax and no formal definitions to describe Python's actual type system. Nor do I think it's a very good system, not to the point that it would be useful to formalize and study.
In Russian, there's an expression "like a saddle on a cow"; I'm not sure what the equivalent in English would be. It describes a situation where someone is desperately trying to add a desirable feature to an existing product that is ultimately not compatible with such a feature. This, in my mind, is the best description of the relationship between Python's actual type system and the one from the typing package.
> In Russian, there's an expression "like a saddle on a cow", I'm not sure what the equivalent in English would be
“To fit a square peg into a round hole”
Close but not the same. In Russian, the expression implies an "upgrade", a failed attempt at improving something that either doesn't require improvement or cannot be improved in this particular way. This would be a typical example of how it's used: "I'm going to be a welder, I need this bachelor's degree like a saddle on a cow!".
"Lipstick on a pig"? Although that's quite more combative than the Russian phrase.
Yeah... this seems like it would fit the bill nicely. At least, this is the way I'd translate it if I had to. Just didn't think about it.
I like your point! I think the advantage in its light is this: People often use Python because it's convention in the domain, the project already uses it, or it's the language the rest of the team uses. So, you are perhaps violating the spirit, but that's OK. You are making the most of tools available. It's not the Platonic (Pythonic??) ideal, but good enough.
Isn't this supported by typing.SupportsIndex? https://docs.python.org/3/library/typing.html#typing.Support...
Mind you, I haven't used it before, but it feels very similar to the abstract Mapping types.
__index__ is not what you think it is
Oops, thanks for the correction. That's on me for drive-by commenting.
I mean, you can just... not annotate something if creating the relevant type is a pain. Static analysis ≠ type hints, and even then...
Besides, there must be some behavior you expect from this object. You could make a type that reflects this: IntIndexable or something, with an int index method and whatever else you need.
This feels like an extremely weak argument. Just think of it as self-enforcing documentation that also benefits auto-complete; what's not to love? Having an IntIndexable type seems like a great idea in your use case.
> The reason is that they are not really a part of the language, they violate the spirit of the language
This is a good way of expressing my own frustration with bolting strong typing on languages that were never designed to have it. I hate that TypeScript has won out over JavaScript because of this - it’s ugly, clumsy, and boilerplatey - and I’d be even more disappointed to see the same thing happen to the likes of Python and Ruby.
My background is in strongly typed languages - first C++, then Java, and C# - so I don’t hate them or anything, but nowadays I’ve come to prefer languages that are more sparing and expressive with their syntax.
> something that is indexable by ints.
> ...Dict[int, Any]...
If that is exactly what you want, then define a Protocol:

    from __future__ import annotations

    from typing import Protocol, TypeVar

Then you can call "first" with a list or a tuple or a numpy array, but it will fail if you give it a dict. There is also collections.abc.Sequence, which is a type that has .__getitem__(int), .__getitem__(slice), .__len__ and is iterable. There are a couple of other useful ones in collections.abc as well, including Mapping (which you can use to do Mapping[int, t], which may be of interest to you), Reversible, Callable, Sized, and Iterable.
why not a protocol with getitem with an int arg?
This is like saying you don’t like nails because you don’t understand how to use a hammer though. Developers are not understanding how to use the hints properly which is causing you a personal headache. The hints aren’t bad, the programmers are untrained - the acknowledgement of this is the first step into a saner world.
Why not do Indexable = Any and pass that? Even if it doesn't help your jit or the IDE, at least it is more explicit than
For this very specific example, isn't there something like "Indexable[int]"?
As a static typing advocate I do find it funny how all the popular dynamic languages have slowly become statically typed. After decades of people saying it's not at all necessary and being so critical of statically typed languages.
When I was working on a fairly large TypeScript project it became the norm for dependencies to have type definitions in a relatively short space of time.
People adapt to the circumstances. A lot of Python uses are no longer about fast iteration on the REPL. Instead of that we are shipping Python to execute in clusters on very long running jobs or inside servers. It's not only about having to start all over after hours, it's simply that concurrent and distributed execution environments are hostile to interactive programming. Now you can't afford to wait for an exception and launch the debugger in postmortem. Or even if you do it's not very useful.
And now my personal opinion: if we are going the static typing way, I would prefer simply to use Scala or similar instead of Python with types. Unfortunately, in the same way that high-performance languages like C attract premature optimizers, static types attract premature "abstracters" (C++ attracts both). I also think that dynamic languages have the largest libraries for technical merit reasons: being more "fluid" makes them easier to mix. In the long term the ecosystem converges organically on certain interfaces between libraries.
And so here we are with the half baked approach of gradual typing and #type: ignore everywhere.
Here we are because:
* Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.
* Types are incredibly valuable on hardened production code.
* Most good production code started out spikey, experimental or as an MVP and transitioned.
And so here we are with gradual typing because "throwing away all the code and rewriting it to be "perfect" in another language" has been known for years to be a shitty way to build products.
I'm mystified that more people here don't see that the value and cost of types is NOT binary ("they're good! they're bad!") but exists on a continuum that is contingent on the status of the app and sometimes even the individual feature.
> Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.
I find I’ve spent so much time writing with typed code that I now find it harder to write POC code in dynamic languages because I use types to help reason about how I want to architect something.
Eg “this function should calculate x and return”, well if you already know what you want the function to do then you know what types you want. And if you don’t know what types you want then you haven’t actually decided what that function should do ahead of building it.
Now you might say "the point of experimental code is to figure out what you want functions to do". But even if you're writing an MVP, you should know what each function should do by the time you've finished writing it. Because if you don't know how to build a function, then how do you even know that the runtime will execute it correctly?
Python doesn't have "no types"; in fact, it is strict about types at runtime. You just don't have to waste time reading and writing them early on.
While a boon during prototyping, a project may need more structural support as the design solidifies, it grows, or a varied, growing team takes responsibility.
At some point those factors dominate, to the extent “may need” support approaches “must have.”
My point is if you don’t know what types you need, then you can’t be trusted to write the function to begin with. So you don’t actually save that much time in the end. typing out type names simply isn’t the time consuming part of prototyping.
But when it comes to refactoring, having type safety makes it very easy to use static analysis (typically the compiler) check for type-related bugs during that refactor.
I've spent a fair number of years in a great many different PL paradigms and I've honestly never found loosely typed languages any faster for prototyping.
That all said, I will say that a lot of this also comes down to what you’re used to. If you’re used to thinking about data structures then your mind will go straight there when prototyping. If you’re not used to strictly typed languages, then you’ll find it a distraction.
Right after hello world you need a list of arguments or a dictionary of numbers to names. Types.
Writing map = {} is a few times faster than map: Dict[int, str] = {}. Now multiply by ten instances. Oh wait, I'm going to change that to a tuple of pairs instead.
It takes me about three times longer to write equivalent Rust than Python, and sometimes it’s worth it.
Rust is slower to prototype than Python because Rust is a low level language. Not because it’s strictly typed. So that’s not really a fair comparison. For example, assembly doesn’t have any types at all and yet is slower to prototype than Rust.
Let’s take Visual Basic 6, for example. That was very quick to prototype in even with “option explicit” (basically forcing type declarations) defined. Quicker, even, than Python.
Typescript isn’t any slower to prototype in than vanilla JavaScript (bar setting up the build pipeline — man does JavaScript ecosystem really suck at DevEx!).
Writing map = {} only saves you a few keystrokes. And unless you're typing really slowly with one finger, like an 80-year-old using a keyboard for the first time, you'll find the real input bottleneck isn't how quickly you can type your data structures into code, but how quickly your brain can turn a product spec / Jira ticket into a mental abstraction.
> Oh wait, I’m going to change that to a tuple of pairs instead
And that’s exactly when you want the static analysis of a strict type system to jump in and say “hang on mate, you’ve forgotten to change these references too” ;)
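A minimal sketch of that refactor scenario (the function name and data are illustrative):

```python
from typing import Dict, List, Tuple

def lookup(table: Dict[int, str], key: int) -> str:
    return table[key]

scores: Dict[int, str] = {1: "alice", 2: "bob"}
print(lookup(scores, 1))  # -> alice

# The "tuple of pairs" refactor changes the data's shape:
pairs: List[Tuple[int, str]] = [(1, "alice"), (2, "bob")]

# A checker such as mypy now rejects every stale call site statically,
# instead of letting it fail at runtime:
# lookup(pairs, 1)  # error: incompatible type for "table"
```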
Having worked on various code bases across a variety of different languages, the refactors that always scare me the most isn’t the large code bases, it’s the ones in Python or JavaScript because I don’t have a robust type system providing me with compile-time safety.
There’s an old adage that goes something like this: “don’t put off to runtime what can be done in compile time.”
As computers have gotten exponentially faster, we’ve seemed to have forgotten this rule. And to our own detriment.
I've found the transition point where types start being useful can arrive within a few hundred lines of code, and I've found types are not that restrictive, if at all, especially if the language started out typed. For the rare case where I need to discard types, an escape hatch is usually available, and needing one is usually a code smell that you're doing something wrong.

Even within a recent 1h toy Python interview question, having types would've saved me some issues and caught an error that wasn't obvious. It probably would've saved 10 minutes in the interview.
Yep, depends on your memory context capacity.
For me I often don't feel any pain-points when working before about 1kloc (when doing JS), however if a project is above 500loc it's often a tad painful to resume it months later when I've started to forget why I used certain data-structures that aren't directly visible (adding types at that point is usually the best choice since it gives a refresher of the code at the same time as doing a soundness check).
The transition to where type hints become valuable or even necessary isn't about how many lines of code you have; it's about how much you rely upon their correctness.

Type strictness also isn't binary. A program with lots of dicts that should be classes doesn't get much safer just because you wrote : dict[str, dict] everywhere.
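A quick illustration of the difference (field names are made up):

```python
from dataclasses import dataclass
from typing import Dict

# "Typed", but the checker knows nothing about the payload's shape:
rows: Dict[str, Dict] = {"alice": {"age": 31, "admin": True}}

# The same record as a class: field names and field types are checked.
@dataclass
class User:
    name: str
    age: int
    admin: bool

u = User(name="alice", age=31, admin=True)
print(u.age)  # -> 31
# A typo like u.aeg is a static error; rows["alice"]["aeg"] is not.
```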
> * Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.
Press "X" to doubt. Types help _a_ _lot_ by providing autocomplete, inspections, and helping with finding errors while you're typing.
This significantly improves iteration speed, as you don't need to run the code to detect that you mistyped a variable somewhere.
PyCharm, pyflakes, et al. can do most of these without written types.
The more interesting questions, like “should I use itertools or collections?” Autocomplete can’t help with.
In some fields throwing away and rewriting is the standard, and it works, more or less. I'm thinking about scientific/engineering software: prototype in Python or Matlab and convert to C or C++ for performance/deployment constraints. It happens frequently with compilers too. I think migrating languages is actually more successful than writing second versions.
> * Types are expensive and don't tend to pay off on spikey/experimental/MVP code, most of which gets thrown away.
This is what people say, but I don't think it's correct. What is correct is that say, ten to twenty years ago, all the statically typed languages had other unacceptable drawbacks and "types bad" became a shorthand for these issues.
I'm talking about C (nonstarter for obvious reasons), C++ (a huge mess, footguns, very difficult, presumably requires a cmake guy), Java (very restrictive, slow iteration and startups, etc.). Compared to those just using Python sounds decent.
Nowadays we have Go and Rust, both of which are pretty easy to iterate in (for different reasons).
> Nowadays we have Go and Rust, both of which are pretty easy to iterate in (for different reasons).
It's common for Rust to become very difficult to iterate in.
https://news.ycombinator.com/item?id=40172033
I think Java was the main one. C/C++ are (relatively) close to the metal, system-level languages with explicit memory management - and were tacitly accepted to be the "complicated" ones, with dynamic typing not really applicable at that level.
But Java was the high-level, GCed, application development language - and more importantly, it was the one dominating many university CS studies as an education language before python took that role. (Yeah, I'm grossly oversimplifying - sincere apologies to the functional crowd! :) )
The height of the "static typing sucks!" craze was more like a "The Java type system sucks!" craze...
For me it was more the “java can’t easily process strings” craze that made it impractical to use for scripts or small to medium projects.
Not to mention boilerplate BS.
Recently, Java has improved a lot on these fronts. Too bad it’s twenty-five years late.
The issue with moving the ship to where its passengers want it to be is that it makes it more difficult for new passengers to get on.
This is clearly seen with typescript and the movement for "just use JS".
Furthermore, with LLMs, it should be easier than ever to experiment in one language and use another language for production loads.
I don't think types are expensive for MVP code unless they're highly complicated (but why would you do that?) Primitives and interfaces are super easy to type and worth the extra couple seconds.
Software quality only pays off in the long term. In the short term, garbage is quick and gets the job done.

Also, in my experience, the long term for software arrives in a couple of weeks.
PHP is a great example of the convergence of interfaces. Now they have different “PSR” standards for all sorts of things. There is one for HTTP clients, formatting, cache interfaces, etc. As long as your library implements the spec, it will work with everything else and then library authors are free to experiment on the implementation and contribute huge changes to the entire ecosystem when they find a performance breakthrough.
Types seem like a “feature” of mature software. You don’t need to use them all the time, but for the people stuck on legacy systems, having the type system as a tool in their belt can help to reduce business complexity and risk as the platform continues to age because tooling can be built to assert and test code with fewer external dependencies.
Python is ubiquitous in ML, often you have no choice but to use it
[dead]
> slowly become statically typed
They don't. They become gradually typed, which is a thing of its own.
You can keep the advantages of dynamic languages, the ease of prototyping but also lock down stuff when you need to.
It is not a perfect union; generally the trade-off is that you either can't achieve the same safety level as in a purely statically typed language because you need to provide some escape hatches, or you need an extremely complex type system to capture the expressiveness of the dynamic side. Most of the time it is a mixture of both.

Still, I think this is the way to go. Not "dynamic typing won" or "static typing won": both are useful, and having a language support both is a huge productivity boost.
> how all the popular dynamic languages have slowly become statically typed
Count the amount of `Any` / `unknown` / `cast` / `var::type` in those codebases, and you'll notice that they aren't particularly statically typed.
The types in dynamic languages are useful for checking validity in majority of the cases, but can easily be circumvented when the types become too complicated.
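For example, in typed Python the usual escape hatch looks like this (the function name is made up):

```python
from typing import cast

def parse_id(raw: object) -> int:
    # Escape hatch: cast() performs no runtime check at all;
    # it only tells the type checker to trust this expression.
    return cast(int, raw)

# The checker is satisfied, yet at runtime we get a str back:
value = parse_id("oops")
print(type(value).__name__)  # -> str
```

Every such cast (or Any) is a hole in the static guarantees, which is why counting them is a reasonable measure of how "statically typed" a codebase really is.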
It is somewhat surprising that dynamic languages didn't go the pylint way, i.e. checking the codebase by auto-determined types (determined based on actual usage).
Julia (by default) does the latter, and it's terrible. It makes it a) slow, because you have to do nonlocal inference through entire programs, b) impossible to type check generic library code where you have no actual usage, c) very hard to test that some code works generically, as opposed to just with these concrete types, and finally d) broken whenever you have an Any anywhere in the code, because the chain of type information is severed.
In the discussion of static vs dynamic typing solutions like typescript or annotated python were not really considered.
IMHO the idea of a complex and inference-heavy type system that is mostly useless at runtime and compilation, but focused on essentially interactive linting, is relatively recent, and its popularity is due to TypeScript's success.

I think that static typing proponents were thinking of something more along the lines of Haskell/OCaml/Java, rather than a type-erased layer on top of a language where [1,2] > 0 silently evaluates to false, because the array is converted to the string "1,2", which becomes NaN when compared to 0.
OTOH I only came to realize that I actually like duck typing in some situations when I tried to add type hints to one of my Python projects (and then removed them again, because the actually important types consisted almost entirely of sum types, and what's the point of static typing if everything is a variant anyway?).
E.g. when Python is used as a 'scripting language' instead of a 'programming language' (like for writing small command line tools that mainly process text), static typing often just gets in the way. For bigger projects where static typing makes sense I would pick a different language. Because tbh, even with type hints Python is a lousy programming language (but a fine scripting language).
> Because tbh, even with type hints Python is a lousy programming language (but a fine scripting language).
I'd be interested in seeing you expand on this, explaining the ways you feel Python doesn't make the cut for programming language while doing so for scripting.
The reason I say this is because, intuitively, I've felt this way for quite some time but I am unable to properly articulate why, other than "I don't want all my type errors to show up at runtime only!"
Learn how to use the tools to prevent that last paragraph.
Note 1: type hints are hints for the reader. If you cleverly discovered that your function handles any type of data, hint that!
Note 2: from my experience in Java, I have NEVER seen a function that explicitly consumes an Object. In Java, you always name things, maybe with parametric polymorphism to capture complex typing patterns.
Note 3: unfortunately, you cannot subclass String to capture the semantics of its content.
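Typed Python is a bit luckier on that last point: typing.NewType can capture the semantics of a string's content without subclassing (the names below are illustrative):

```python
from typing import NewType

# Distinct "kinds" of string: erased at runtime, checked statically.
Email = NewType("Email", str)
UserId = NewType("UserId", str)

def send_welcome(to: Email) -> str:
    return f"sent to {to}"

print(send_welcome(Email("alice@example.com")))  # -> sent to alice@example.com
# send_welcome(UserId("u123")) is a static type error,
# even though both wrappers are plain str at runtime.
```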
> Java, i have NEVER seen a function that consumes explicitely an Object
So you did not see any Java code from before version 5 (in 2004) then, because the language did not have generics for the first several years it was popular. And of course many were stuck working with older versions of the language (or variants like mobile Java) without generics for many years after that.
Exactly, I have never seen such code [*].
Probably because the adoption of the generics has been absolutely massive in the last 20 years. And I expect the same thing to eventually happen with Typescript and [typed] Python.
[*]: nor have I seen EJB1 or even EJB2. Spring just stormed them, in the last 20 years.
An example of a function in Java that consumes a parameter of type Object is System.out.println(Object o)
Many such cases.
Sounds to be more of a symptom of the types of programs and functions you have written, rather than something inherent about types or Python. I've never encountered the type of gerry-mangled scenario you have described no matter how throwaway the code is.
If you like dynamic types have you considered using protocols? They are used precisely to type duck typed code.
AI tab-complete & fast LSP implementations made typing easy. The tools changed, and people changed their minds.
JSON's interaction with types is still annoying. A deserialized JSON could be any type. I wish there was a standard python library that deserialized all JSON into dicts, with opinionated coercing of the other types. Yes, a custom normalizer is 10 lines of code. But, custom implementations run into the '15 competing standards' problem.
Actually, there should be a popular type-coercion library that deals with a bunch of these annoying scenarios. I'd evangelize it.
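A sketch of such a normalizer in plain Python; the coercion policy here (whole-number floats to int, null to empty string) is just one opinionated choice, not a standard:

```python
import json
from typing import Any, Dict

def normalize(value: Any) -> Any:
    """Opinionated cleanup of freshly deserialized JSON."""
    if isinstance(value, dict):
        # Force string keys and recurse into values.
        return {str(k): normalize(v) for k, v in value.items()}
    if isinstance(value, list):
        return [normalize(v) for v in value]
    if isinstance(value, float) and value.is_integer():
        # JSON has only one number type; recover ints where possible.
        return int(value)
    if value is None:
        return ""
    return value

doc: Dict[str, Any] = normalize(json.loads('{"n": 3.0, "note": null, "xs": [1.5, 2.0]}'))
print(doc)  # -> {'n': 3, 'note': '', 'xs': [1.5, 2]}
```

The "15 competing standards" problem remains: every team's coercion policy differs, which is exactly why a single popular library would help.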
> all the popular dynamic languages have slowly become statically typed
I’ve heard this before, but it’s not really true. Yes, maybe the majority of JavaScript code is now statically-typed, via Typescript. Some percentage of Python code is (I don’t know the numbers). But that’s about it.
Very few people are using static typing in Ruby, Lua, Clojure, Julia, etc.
Types become very useful when the code base reaches a certain level of sophistication and complexity. It makes sense that for a little script they provide little benefit but once you are working on a code base with 5+ engineers and no longer understand every part of it having some more strict guarantees and interfaces defined is very very helpful. Both for communicating to other devs as well as to simply eradicate a good chunk of possible errors that happen when interfaces are not clear.
How many people are using Ruby, Lua, Clojure, Julia, etc.?
Fair enough, apart from Ruby they’re all pretty niche.
OTOH I’m not arguing that most code should be dynamically-typed. Far from it. But I do think dynamic typing has its place and shouldn’t be rejected entirely.
Also, I would have preferred it if Python had concentrated on being the best language in that space, rather than trying to become a jack-of-all-trades.
I have my doubts about majority of JavaScript being TypeScript.
You’re probably right. RedMonk [0] shows JavaScript and TypeScript separately and has the former well above the latter.
[0] https://redmonk.com/sogrady/2025/06/18/language-rankings-1-2...
Even if they're not written as TypeScript, there are usually add on definitions like "@types/prettier" and the like.
I disagree for Julia, but that probably depends on the definition of static typing.
For the average Julia package, I would guess that most types are statically known at compile time, because dynamic dispatch is detrimental to performance. I consider that to be the definition of static typing.

That said, Julia functions seldom use concrete types and are generic by default. So the function signatures often look similar to untyped Python, but in my opinion this is something entirely different.
At least in Ruby there are major code bases using Stripe's Sorbet and the official RBS standard for type hints. Notably it's big code bases with large numbers of developers, fitting the trend most people in this discussion point to.
My last job was working at a company that is notorious for Ruby and even though I was mostly distant from it, there seemed to be a big appetite for Sorbet there.
The big difference between static typing in Python and Ruby is that Guido et al have embraced type hints, whereas Matz considers them to be (the Ruby equivalent of) “unpythonic”. Most of each language’s community follows their (ex-)BDFL’s lead.
PHP as well has become statically typed.
All the languages you name are niche languages compared to Python, JS (/ TS) and PHP. Whether you like it or not.
I think you're ignoring how for some of us, gradual typing, is a far better experience than languages with static types.
For example what I like about PHPStan (tacked on static analysis through comments), that it offers so much flexibility when defining type constraints. Can even specify the literal values a function accepts besides the base type. And subtyping of nested array structures (basically support for comfortably typing out the nested structure of a json the moment I decode it).
Not ignoring, I just didn't write an essay. In all that time working with TypeScript there was very little that I found to be gradually typed, it was either nothing or everything, hence my original comment. Sure some things might throw in a bunch of any/unknown types but those were very much the rarity and often some libraries were using incredibly complicated type definitions to make them as tight as possible.
Worked with python, typescript and now php, seems that phpstan allows this gradual typing, while typescript kinda forces you to start with strict in serious projects.
Coming from Java extreme verbosity, I just loved the freedom of python 20 years ago. Working with complex structures with mixed types was a breeze.
Yes, it was your responsibility to keep track of correctness, but that also taught me to write better code, and better tests.
Writing tests is harder work than writing the equivalent number of type hints though.
Type hints and/or stronger typing in other languages are not good substitutes for testing. I sometimes worry that teams with strong preferences for strong typing have a false sense of security.
People write tests in statically typed languages too, it's just that there's a whole class of bugs that you don't have to test for.
Hints are not sufficient, you’ll need tests anyway. They somewhat overlap.
Writing and maintaining tests that just do type checking is madness.
Dynamic typing also gives tooling such as LSPs and linters a hard time figuring out completions/references lookup etc. Can't imagine how people work on moderate to big projects without type hints.
Type hints / gradual typing is crucially different from full static typing though.
It’s valid to say “you don’t need types for a script” and “you want types for a multi-million LOC codebase”.
Static typing used to be too rigid and annoying to the point of being counterproductive. After decades of improvement of parsers and IDEs they finally became usable for rapid development.
Everything goes in cycles. It has happened before and it will happen again. The software industry is incredibly bad at remembering lessons once learned.
That's because many do small things that don't really need it; sure, there are some people doing larger stuff who are happy to be the sole maintainer of a codebase, or to replace the language's types with unit-test type checks.

And I think they can be correct in rejecting it: banging out a small useful project (preferably below 1000 loc) flows much faster if you just build code doing things rather than start annotating (which can quickly become a mind-sinkhole of naming decisions that interrupts a building flow).
However, even less complex 500 loc+ programs without typing can become a pita to read after the fact and approaching 1kloc it can become a major headache to pick up again.
Basically, can't beat speed of going nude, but size+complexity is always an exponential factor in how hard continuing and/or resuming a project is.
Thing is, famous dynamic languages of the past, Common Lisp, BASIC, Clipper, FoxPro, all got type hints for a reason, then came a new generation of scripting languages made application languages, and everyone had to relearn why the fence was in the middle of the field.
I think both found middle ground. In Java you don’t need to define the type of variables within the method. In Python people have learned types in method arguments is a good thing.
> After decades of people saying
You have to admit that the size and complexity of the software we write has increased dramatically over the last few "decades". Looking back at MVC "web applications" I created in the early 2000s, and comparing them to the giant workflows we deal with today... it's not hard to imagine how dynamic typing was/is OK to get started, but when things exceed one's "context", type hints help.
I like static types but advocating for enforcing them in any situation is different. Adding them when you need (Python currently) seems a better strategy than forcing you to set them always (Typescript is in between as many times it can determine them).
Many years ago I felt Java typing could be overkill (some types could have been deduced from context and they were too long to write) so probably more an issue about the maturity of the tooling than anything else.
What I would need is a statically typed language that has first class primitives for working with untyped data ergonomically.
I do want to be able to write a dynamically typed function or subsystem during the development phase, and „harden” with types once I’m sure I got the structure down.
But the dynamic system should fit well into the language, and I should be able to easily and safely deal with untyped values and convert them to typed ones.
So… Typescript?
Yes, the sad part is that some people experienced early TypeScript, which for some reason had the idea of forcing "class" constructs into a language where most people weren't using or needing them (and still aren't).

Sometime around TypeScript 2.9, it finally started adding constructs that made gradual typing of real-world JS code sane, but by then there was a stubborn perception of it being bad/bloated/Java-ish, etc., despite it maturing into something fairly great.
The need for typing changed, when the way the language is used changed.
When JavaScript programs were a few hundred lines to add interactivity to some website, type annotations were pretty useless. Now the typical JavaScript project is far larger and far more complex. The same goes for Python.
Trends change. There is still no hard evidence that static types are net positive outside of performance.
dynamically-typed languages were typically created for scripting tasks - but ended up going viral (in part due to d-typing), the community stretched the language to its limits and pushed it into markets it wasn't designed/thought for (embedded python, server-side js, distributed teams, dev outsourcing etc).
personally i like the dev-sidecar approach to typing that Python and JS (via TS) have taken to mitigate the issue.
JavaScript is no longer just scripting. Very large and complex billion-dollar apps are being written in pure JavaScript. It grew up.
I guess Python is next.
Next stop is to agree that JSON is really NOT the semantic data exchange serialization for this "properly typed" world.
Then what is?
Everybody knows the limitations of JSON. Don't state the obvious problem without stating a proposed solution.
The RDF structure is a graph of typed instances of typed objects, serializable as text.
Exchanging RDF, more precisely its [more readable] "RDF/turtle" variant, is probably what will eventually come to the market somehow.
Each object of a RDF structure has a global unique identifier, is typed, maintains typed links with other objects, have typed values.
For an example of RDF being exchanged between a server and a client, you can test
https://search.datao.net/beta/?q=barack%20obama
Open your javascript console, and hover the results on the left hand side of the page with your mouse. The console will display which RDF message triggered the viz in the center of the page.
Update: you may want to FIRST select the facet "DBPedia" at the top of the page, for more meaningful messages exchanged.
Update 2: the console does not do syntax highlighting, so here is the highlighted RDF https://datao.net/ttl.jpg linked to the 1st item of " https://search.datao.net/beta/?q=films%20about%20barack%20ob... "
That's a circular argument. What serialization format would you recommend? JSON?
Turtle directly.
JSON forces you to fit your graph of data into a tree structure, that is poorly capturing the cardinalities of the original graph.
Plus of course, the concept of object type is not existing in JSON.
Thank you, I did not realize that RDF has its own serialization format. I'm reading about it now.
Huh. It's almost like these people didn't know what they were talking about. How strange.
I think that the practically available type checkers evolved to a point where many of the common idioms can be expressed with little effort.
If one thinks back to some of the early statically typed languages, you'd have a huge rift: You either have this entirely weird world of Caml and Haskell (which can express most of what python type hints have, and could since many years), and something like C, in which types are merely some compiler hints tbh. Early Java may have been a slight improvement, but eh.
Now, especially with decent union types, you can express a lot of idioms of dynamic code easily. So it's a fairly painless way to get type completion in an editor, so one does that.
Well, we do coalesce on certain things... some static type languages are dropping type requirements (Java and `var` in certain places) :D
There's no dropping of type requirements in Java, `var` only saves typing.
When you use `var`, everything is as statically typed as before, you just don't need to spell out the type when the compiler can infer it. So you can't (for example) say `var x = null` because `null` doesn't provide enough type information for the compiler to infer what's the type of `x`.
> `var` only saves typing.
this is a lovely double entendre
var does absolutely nothing to make Java a less strictly typed language. There is absolutely no dropping of the requirement that each variable has a type which is known at compile time.
Automatic type inference and dynamic typing are totally different things.
I have not written a line of Java in at least a decade, but does Java not have any 'true' dynamic typing like C# does? Truth be told, the 'dynamic' keyword in C# should only be used in the most niche of circumstances. Typically, only practitioners of Dark Magic use the dynamic type. For the untrained, it often leads one down the path of hatred, guilt, and shame. For example:
    dynamic x = "Forces of Darkness, grant me power";
    Console.WriteLine(x.Length); // Dark forces flow through the CLR
    x = 5;
    Console.WriteLine(x.Length); // Runtime error: CLR consumed by darkness.
C# also has the statically typed 'object' type which all types inherit from, but that is not technically a true instance of dynamic typing.
Same nonsense repeated over and over again... There aren't dynamic languages. It's not a thing. The static types aren't what you think they are... You just don't know what you are saying and your conclusion is just a word salad.
What happened to Python is that it used to be a "cool" language, whose community liked to make fun of Java for their obsession with red-taping, which included the love for specifying unnecessary restrictions everywhere. Well, just like you'd expect from a poorly functioning government office.
But then everyone wanted to be cool, and Python was adopted by the programming analogue of the government bureaucrats: large corporations which treat programming as a bureaucratic mill. They don't want fun or creativity or one-off bespoke solutions. They want an industrial process that works on as large a scale as possible, to employ thousands of worst-quality programmers, but still reliably produce slop.
And incrementally, Python was made into Java. Because, really, Java is great for producing slop on an industrial scale. But the "cool" factor was important to attract talent because there used to be a shortage, so, now you have Python that was remade to be a Java. People who didn't enjoy Java left Python over a decade ago. So that Python today has nothing in common with what it was when it was "cool". It's still a worse Java than Java, but people don't like to admit defeat, and... well, there's also the sunk cost fallacy: so much effort was already spent at making Python into a Java, that it seems like a good idea to waste even more effort to try to make it a better Java.
Yeah, this is the lens through which I view it. It's a sort of colonization that happens, when corporations realize a language is fit for plunder. They start funding it, then they want their people on the standards boards, then suddenly the direction of the language is matched very nicely to their product roadmap. Meanwhile, all the people who used to make the language what it was are bought or pushed out, and the community becomes something else entirely.
I love typing in Python. I learnt programming with C++ and OOP. It was freeing, when I took up Python, to not care about types, but I have come to enjoy types as I've gotten older.
But, boy, have we gone overboard with it now. The modern libraries seem to be creating types for the sake of it. I am drowning in nested types that never seem to bottom out at native types. The painful part is that the libraries' own code examples don't even show them.
Try copy-pasting an OpenAI example and seeing if the LSP is happy, for example. I have now gotten into a situation where I am mentally avoiding the type errors of some libraries and edging into wishing Pydantic et al. had never happened.
My love for python was critically hurt when I learned about typing.TYPE_CHECKING.
For those unaware: due to the dynamic nature of Python, you declare a variable's type like this:
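(The original snippet appears to have been lost; a minimal sketch of the syntax being described, with hypothetical names:)

```python
class Foo:
    pass

# The annotation after the colon is an ordinary Python expression;
# `Foo` here is the class object itself, evaluated in the current scope.
x: Foo = Foo()

def takes_foo(arg: Foo) -> Foo:
    return arg
```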
This might look like TypeScript, but it isn't, because the type is actually an object. In Python, classes and functions are first-class objects that you can pass around and assign to variables. The obvious problem with this is that you can only use as a type an object that, in "normal Python", would be available in the scope of that line, which means that you can't do this:
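(The snippet here also seems to have been lost; the failure mode being described is roughly the following, wrapped in try/except so the example runs. Note that on Python 3.14+ with PEP 649 the annotation is evaluated lazily and no error occurs.)

```python
error = None
try:
    # On Python <= 3.13, without `from __future__ import annotations`,
    # the annotation expression `Bar` is evaluated right when the `def`
    # statement runs, before the class exists, raising NameError.
    def foo(x: Bar) -> None:
        ...
except NameError as e:
    error = str(e)

class Bar:
    ...
```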
Because "Bar" is defined AFTER foo() it isn't in the scope when foo() is declared. To get around this you use this weird string-like syntax: This already looks ugly enough that should make Pythonists ask "Python... what are you doing?" but it gets worse.If you have a cyclic reference between two files, something that works out of the box in statically typed languages like Java, and that works in Python when you aren't using type hints because every object is the same "type" until it quacks like a duck, that isn't going to work if you try to use type hints in python because you're going to end up with a cyclic import. More specifically, you don't need cyclic imports in Python normally because you don't need the types, but you HAVE to import the types to add type hints, which introduces cyclic imports JUST to add type hints. To get around this, the solution is to use this monstrosity:
And that's code that only "runs" when the static type checker is statically checking the types. Nobody wants Python 4, but this was such an incredibly convoluted way to add this feature, especially when you consider that it means every module now "over-imports" just to add type hints that it previously didn't have to.
Every time I see it makes me think that if type checks are so important maybe we shouldn't be programming Python to begin with.
There's actually another issue with ForwardRefs. They don't work in the REPL. So this will work when run as a module:
But it will throw an error if copy-pasted into a REPL. However, all of these issues should be fixed in 3.14 with PEP 649 and PEP 749:
> At compile time, if the definition of an object includes annotations, the Python compiler will write the expressions computing the annotations into its own function. When run, the function will return the annotations dict. The Python compiler then stores a reference to this function in __annotate__ on the object.
> This mechanism delays the evaluation of annotations expressions until the annotations are examined, which solves many circular reference problems.
It doesn't throw an error in the REPL, though. Surely you meant to share some other example?
Please ignore my first assertion that the behavior between REPL and module is different.
This would have been the case if the semantics of the original PEP649 spec had been implemented. But instead, PEP749 ensures that it is not [0]. My bad.
[0] https://peps.python.org/pep-0749/#behavior-of-the-repl
> that isn't going to work if you try to use type hints in python because you're going to end up with a cyclic import. More specifically, you don't need cyclic imports in Python normally because you don't need the types, but you HAVE to import the types to add type hints, which introduces cyclic imports JUST to add type hints.
Yes, `typing.TYPE_CHECKING` is there so that you can conditionally avoid imports that are only needed for type annotations. And yes, importing modules can have side effects and performance implications. And yes, I agree it's ugly as sin.
But Python does in fact allow for cyclic imports — as long as you're importing the modules themselves, rather than importing names `from` those modules. (By the way, the syntax is the other way around: `from ... import ...`.)
https://stackoverflow.com/questions/744373/
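A self-contained sketch of the distinction (the two modules are written to a temp dir here only so the example runs standalone; the point is that each imports the other *module*, not names from it):

```python
import os
import sys
import tempfile
import textwrap

d = tempfile.mkdtemp()

with open(os.path.join(d, "cyc_a.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import cyc_b  # cycles back to cyc_b: fine, nothing accessed at import time

        def greet():
            return "a"
    """))

with open(os.path.join(d, "cyc_b.py"), "w") as f:
    f.write(textwrap.dedent("""\
        import cyc_a  # importing the module itself, not `from cyc_a import ...`

        def greet():
            # the attribute is looked up at call time, after both modules loaded
            return "b" + cyc_a.greet()
    """))

sys.path.insert(0, d)
import cyc_b  # triggers the cycle: cyc_b -> cyc_a -> cyc_b (already in sys.modules)
```

Had `cyc_b` used `from cyc_a import greet` instead, the import would fail, because `cyc_a` is only partially initialized at that point in the cycle.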
This is trivial to solve by simply not having circular imports. Place the types in one file and the usage of it in others.
This has many benefits, like forcing you to think about the dependencies and layers of your architecture. Here is a good read about why, from F# that has the same limitation https://fsharpforfunandprofit.com/posts/cyclic-dependencies/
As others already mentioned, importing __annotations__ also works.
If the type is a class with methods, then this method doesn't work, though adding intermediate interface classes (possibly with generic types) might help in most cases. Python's static type system isn't quite on the same level as F#'s.
> Well, these complaints are unfounded.
"You're holding it wrong." I've also coded quite a bit of OCaml, and it had the same limitation (which is where F# picked it up in the first place); while the issue can be worked around, it still seemed to crop up at times. Rust, also with some virtual OCaml ancestry, went completely the opposite way.
My view is that while in principle it's a nice property that you can read and understand a piece of code by starting from the top and going to the bottom (and a REPL is going to do exactly that), in practice it's not the ultimate property to uphold.
> If the type is a class with methods, then this method doesn't work
Use typing.Self
I meant if you have two classes that need to refer to each other. But good pointer anyway, I hadn't noticed it, thanks!
I ran into some code recently where this pattern caused me so much headache - class A has an attribute which is an instance of class B, and class B has a "parent" attribute (which points to the instance of class A that class B is an attribute of):
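(The snippet was lost; the shape being described is roughly this, with hypothetical names:)

```python
class A:
    def __init__(self) -> None:
        self.b = B(parent=self)  # A owns a B...

class B:
    def __init__(self, parent: "A") -> None:  # string forward ref to A
        self.parent = parent     # ...and B points back at its A
```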
Obviously both called into each other to do $THINGS... Pure madness. So my suggestion: try not to have interdependent classes :D
Well, at times having a parent pointer is rather useful! E.g. a callback registration will be able to unregister itself from everywhere where it has been registered to, upon request. (One would want to use weak references in this case.)
Fair point!
Maybe I am just a bit burned by this particular example I ran into (where this pattern should IMO not have been used).
> If you have a cyclic reference between two files,
Don't have cyclic references between two files.
It makes testing very difficult, because in order to test something in one file, you need to import the other one, even though it has nothing to do with the test.
It makes the code more difficult to read, because you're importing these two files in places where you only need one of them, and it's not immediately clear why you're importing the second one. And it's not very satisfying to learn that you're importing the second one not because you "need" it but because the circular import forces you to do so.
Every single time you have cyclic references, what you really have are two pieces of code that rely on a third piece of code, so take that third piece, separate it out, and have the first two pieces of code depend on the third piece.
Now things can be tested, imports can be made sanely, and life is much better.
Using the typical "Rust-killer" example: if you have a linked list where the List in list.py returns a Node type and Node in node.py takes a List in its constructor, you already have a cyclic reference.
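Collapsed into one file for illustration (a sketch, not the actual list.py/node.py split), the mutual reference is easy to express with deferred annotations:

```python
from __future__ import annotations  # all annotations become lazy

class LinkedList:
    def __init__(self) -> None:
        self.head: Node | None = None  # refers to Node before it's defined

    def push(self, value: int) -> Node:
        self.head = Node(value, owner=self)
        return self.head

class Node:
    def __init__(self, value: int, owner: LinkedList) -> None:
        self.value = value
        self.owner = owner  # back-reference to the list
```

Split across two files, though, the same annotations force an import in each direction, which is the cyclic-import problem being discussed.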
Agreed that this "hack" is very ugly!
On the other hand, I tend to take it as a hint that I should look at my module structure, and see if I can avoid the cyclic import (even if before adding type hints there was no error, there still already was a "semantic dependency"...)
You're missing the benefit of this. It's actually a feature.
With Python, because types are part of Python itself, they can thus be programmable. You can create a function that takes in a type hint and returns a new type hint. This is legal Python. For example, below I create a function that dynamically returns a type that restricts a dictionary to have a specific key and value type.
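Something along these lines (a sketch using the functional `TypedDict` form; the name `make_typed_dict` comes up again downthread):

```python
from typing import TypedDict

def make_typed_dict(key: str, value_type: type) -> type:
    # Types are ordinary runtime objects, so a function can build one
    # on the fly. Note: static checkers generally can't follow this;
    # the constructed type only exists at runtime.
    return TypedDict("Generated", {key: value_type})

Movie = make_typed_dict("title", str)
```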
With this power, in theory you can create programs where types essentially "prove" your program correct and eliminate unit tests. Languages like Idris specialize in this. But it's not just rare/specialized languages that do this. TypeScript, believe it or not, has programmable types so powerful that writing functions that return types like the one above is actually very commonplace. I was a bit late to the game with TypeScript, but I was shocked to see that it was taking cutting-edge stuff from the typing world and making it popular among users. In practice, using types to prove programs valid in place of testing is a bit too tedious compared with tests, so people don't go overboard with it. It is a much safer route than testing, but much harder. Additionally, as of now, the thing with Python is that it really depends on how powerful the type checker is whether or not it can enforce and execute type-level functions. It's certainly possible; it's just that nobody has done it yet.
I'd go further than this actually. Python is actually a potentially more powerfully typed language than TS. In TS, types are basically another language tacked onto javascript. Both languages are totally different and the typing language is very very limited.
The thing with python is that the types and the language ARE the SAME thing. They live in the same universe. You complained about this, but there's a lot of power in that because basically types become turing complete and you can create a type that does anything including proving your whole program correct.
Like I said that power depends on the typechecker. Someone needs to create a typechecker that can recognize type level functions and so far it hasn't happened yet. But if you want to play with a language that does this, I believe that language is Idris.
That's not a benefit. That's a monstrosity.
And, as you heavily imply in your post, type checkers won't be able to cope with it, eliminating one of the main benefits of type hints. Neither will IDEs / language servers, eliminating the other main benefit.
>And, as you heavily imply in your post, type checkers won't be able to cope with it
I implied no such thing. I literally said there's a language that already does this: TypeScript. IDEs cope with it just fine.
>That's not a benefit. That's a monstrosity.
So typescript is a monstrosity? Is that why most of the world who uses JS in node or the frontend has moved to TS? Think about it.
The syntax is a monstrosity. You can also extract a proven OCaml program from Coq and Coq has a beautiful syntax.
If you insist on the same language for specifying types, some Lisp variants do that with a much nicer syntax.
Python people have been indoctrinated since ctypes that a monstrous type syntax is normal, and they reject anything else. In fact, Python type hints are basically stuck at the ctypes level, syntax-wise.
That's just a sugar thing. Yeah it can get a bit more verbose.
I don't believe TypeScript's (or Idris's) type system works like you describe, though? Types aren't programmable with code like that (in the same universe, as you say), and TS is structurally typed, with type erasure (i.e. types are not available at runtime).
I am not that deeply familiar with Python typings development but it sounds fundamentally different to the languages you compare to.
Typescript types (and Idris) are Turing complete. You can actually get typescript types to run doom.
https://www.youtube.com/watch?v=0mCsluv5FXA&t
Idris on the other hand is SPECIFICALLY designed so types and the program live in the same language. See the documentation intro: https://www.idris-lang.org/pages/example.html
The powerful thing about these languages is that they can prove your program correct. With testing, you can never verify your program to be correct.
Testing is a statistical sampling technique. To verify a program as correct via tests you have to test every possible input and output combination of your program, which is impractical. So instead people write tests for a subset of the possibilities which ONLY verifies the program as correct for that subset. Think about it. If you have a function:
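(The elided function is presumably something trivial like:)

```python
def add(x: int, y: int) -> int:
    return x + y

# A unit test samples only a handful of points from an effectively
# infinite input space:
assert add(0, 0) == 0
assert add(2, 3) == 5
assert add(-1, 1) == 0
```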
How would you verify this program is 100% correct? You would have to test every possible combination of x, y, and add(x, y). But instead you test like 3 or 4 possibilities in your unit tests, and this helps with the overall safety of the program because of statistical sampling: if a small sample of the logic is correct, it says something about the entire population of the logic. Types, on the other hand, prove your program correct.
If the above is type checked, your program is proven correct for ALL possible types. If those types are made more advanced via being programmable, then it becomes possible for type checking to prove your ENTIRE program correct. Imagine:
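(The elided example presumably sketched something like refinement types; `Annotated` can carry the constraint metadata, but note that no current mainstream Python checker actually enforces it; this only illustrates the idea.)

```python
from typing import Annotated

# Hypothetical refinement-style types; today's Python type checkers
# treat the metadata strings as inert and do NOT enforce them.
LessThan4 = Annotated[int, "value < 4"]
LessThan8 = Annotated[int, "value < 8"]

def add(x: LessThan4, y: LessThan4) -> LessThan8:
    return x + y
```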
With a type checker that can analyze the above, you can create an add function that at most can take an int that is < 4 and return an int that is < 8, thereby verifying even more of the correctness of your addition function. Python, on the other hand, doesn't really have type checking. It has type hints. Those type hints can be defined in the same language space as Python. So a type checker must read Python to a limited extent in order to get the types. Python, at the same time, can also read those same types. It's just that Python doesn't do any type checking with the types, while the type checker doesn't do anything with the Python code other than typecheck it.
Right now though, for most typecheckers, if you create a function in python that returns a typehint, the typechecker is not powerful enough to execute that function to find the final type. But this can certainly be done if there was a will because Idris has already done this.
Are there really productive projects which rely on types as a proofing system? I've always thought it added too much complexity to the code, but I'd love to see it working well somewhere. I love the idea of correctness by design.
Not to my knowledge; nothing strictly relies on it as a proofing system, because like I said it becomes hard to do. It could be useful for ultra-safe software, but for most cases the complexity isn't worth it.
But that doesn't mean it's not useful to have this capability as part of your typesystem. It just doesn't need to be fully utilized.
You don't need to program a type that proves everything correct. You can program and make sure aspects of the program are MORE correct than just plain old types. typescript is a language that does this and it is very common to find types in typescript that are more "proofy" than regular types in other languages.
See here: https://www.hacklewayne.com/dependent-types-in-typescript-se...
Typescript does this. Above there's a type that's only a couple of lines long that proves a string reversal function reverses a string. I think even going that deep is overkill but you can define things like Objects that must contain a key of a specific string where the value is either a string or a number. And then you can create a function that dynamically specifies the value of the key in TS.
I think TS is a good example of a language that practically uses proof-based types. The syntax is terrible enough that it prevents people from going overboard with it, and the result is the most practical application of proof-based typing that I've seen. What TypeScript tells us is that proof-based typing need only be sprinkled throughout your code; it shouldn't take it over entirely.
That's horrible. Nobody needs imperative metaprogramming for type hints. In fact, it would be absolute insanity for a typechecker to check this because it would mean opening a file in VS code = executing arbitrary python code. What stops me from deleting $HOME inside make_typed_dict?
TypeScript solves this with its own syntax that never gets executed by an interpreter, because types are stripped when TS is compiled to JS.
>VS code = executing arbitrary python code. What stops me from deleting $HOME inside make_typed_dict?
Easy: make IO calls illegal in the type checker. The type checker of course needs to execute the code in a sandbox. It won't be the full Python language. Idris ALREADY does this.
Isn’t this solved in 3.14/PEP-649?
I want to say it (or something similar at least) was originally addressed by from __future__ import annotations back in Python 3.7/3.8 or thereabouts? I definitely remember having to use stringified types a while back but I haven't needed to for quite a while now.
Yes, annotations allows you to use the declared types as they are, no strings.
It turns them into thunks (formerly strings) automatically, an important detail if you're inspecting annotations at run time because the performance hit of resolving the actual type can be significant.
TIL, thanks! It looks like 3.14 is also changing it so that all evaluations are lazy.
At last, Pi-thon.
from __future__ import annotations
> But, boy have we gone overboard with this now? The modern libraries seem to be creating types for the sake of them. I am drowning in nested types that seem to never reach native types.
Thought you were talking about TypeScript for a moment there.
Except that typescript structural typing and features make it much easier to swim.
Also python is far less aggressive with lint warnings so it is much easier to make mistakes
I learned C++ before learning python as well and python felt like a breath of fresh air.
At first I thought it was because of the lack of types. But in actuality the lack of types was a detriment for python. It was an illusion. The reason why python felt so much better was because it had clear error messages and a clear path to find errors and bugs.
In C++, memory leaks and segfaults are always hidden from view, so even though C++ is statically typed, it's actually practically less safe than Python and much harder to debug.
The whole python and ruby thing exploding in popularity back in the day was a trick. It was an illusion. We didn't like it more because of the lack of typing. These languages were embraced because they weren't C or C++.
It took a decade for people to realize this, with type hints and TypeScript. This was a huge technical debate, and now all those people who were against types have been proven utterly wrong.
> It was an illusion. We didn't like it more because of the lack of typing. These languages were embraced because they weren't C or C++.
It's an illusion only you once had. Java (a language that is not C or C++) got mainstream way before Python.
Java on the other hand had the most verbose syntax known to man, especially those early versions of it. Nowadays it’s getting more tolerable.
I don't understand: the parent says that not being C/C++ was a strong point, and you give a counterexample of a successful language that is not C/C++.
modern C++ is great, to be honest.
The thing that finally got me on board with optional type hints in Python was realizing that they're mainly valuable as documentation.
But it's really valuable documentation! Knowing what types are expected and returned just by looking at a function signature is super useful.
The best kind of documentation is the kind you can trust is accurate. Type defs wouldn't be close to as useful if you didn't really trust them. Similarly, doctests are some of the most useful documentation because you can be sure they are accurate.
The best docs are the ones you can trust are accurate. The second best docs are ones that you can programmatically validate. The worst docs are the ones that can’t be validated without lots of specialized effort.
Python’s type hints are in the second category.
I’d almost switch the order here! In a world with agentic coding agents that can constantly check for type errors from the language server powering the errors/warnings in your IDE, and reconcile them against prose in docstrings… types you can programmatically validate are incredibly valuable.
Do you have an example of the first?
When I wrote that, I was thinking about typed, compiled languages' documentation generated by the compiler at build time. Assuming that version drift ("D'oh, I was reading the docs for v1.2.3 but running v4.5.6") is user error and not a docs-trustworthiness issue, that'd qualify.
But now that I'm coming back to it, I think that this might be a larger category than I first envisioned, including projects whose build/release processes very reliably include the generation+validation+publication of updated docs. That doesn't imply a specific language or release automation, just a strong track record of doc-accuracy linked to releases.
In other words, if a user can validate/regenerate the docs for a project, that gets it 9/10 points. The remaining point is the squishier "the first party docs are always available and well-validated for accuracy" stuff.
Languages with strong static type systems
Is there a mainstream language where you can’t arbitrarily cast a variable to any other type?
This resonates with me so much. I feel like half the comments in this thread are missing the value of typing, but maybe they've never had the misfortune of working with hundreds of other developers on a project with no defined contracts on aggregates / value objects outside of code comments and wishful thinking.
I've worked on large python codebases for large companies for the last ~6 years of my career; types have been the single biggest boon to developer productivity and error reduction on these codebases.
Just having to THINK about types eliminates so many opportunities for errors, and if your type is too complex to express it's _usually_ a code smell; most often these situations can be re-written in a more sane albeit slightly verbose fashion, rather than using the more "custom" typing features.
No one gets points for writing "magical" code in large organizations, and typing makes sure of this. There's absolutely nothing wrong with writing "boring" python.
Could we have accomplished this by simply having used a different language from the beginning? Absolutely, but oftentimes that's not an option for a company with a mature stack.
TL;DR -- Typing in Python is an exceptional tool for scaling your engineering organization on a code base.
Decent argument in principle. It still sucks for non-obvious types though:
https://old.reddit.com/r/Python/comments/10zdidm/why_type_hi...
Edit: Yes, one can sometimes go with Any, depending on the linter setup, but that's missing the point, isn't it?
The correct response to this is to figure out what the use case for your function is, i.e. add two numbers. Set the input and output as Decimal and call it a day.
Sure. Let me just quickly refactor Pytorch.
Actually, it's not missing the point. Sometimes you really do want duck typing, in which case you allow Any. It's not all-or-nothing.
What the reddit post is demonstrating is that the Python type system is still too naive in many respects (and that there are implementation divergences in behavior). In other languages, this is a solved problem - and very ergonomic and safe.
As the top comment says, if you don't know or want to define the type just use Any. That's what it's there for.
That entire Reddit post is a clueless expert beginner rant about something they don't really understand, unfortunate that it's survived as long as it has or that anyone is taking it as any sort of authoritative argument just because it's long.
> if you don't know or want to define the type
That's not the issue the reddit post is raising. The reddit post is pointing out that what a "type" is is not as simple as it looks. Particularly in a language like Python where user-defined types proliferate, and can add dunder methods that affect statements that involve built-in operations. "Just use Any" doesn't solve any of those problems.
> just use Any.
All the above said: not putting a type in at all is even easier than using Any, and is semantically equivalent.
The Reddit post falls under the case of "don't know" the type. If you want to allow users to pass in any objects, try to add and fail at runtime... that's exactly what Any is for.
But the entire post is built upon the premise that accepting all types is good API design. Which it isn't, at all.
> The Reddit post falls under the case of "don't know" the type.
No, it doesn't. The desired type is known; it's "Addable" (i.e., "doesn't throw an exception when the built-in add operator is used"). The problem is expressing that in Python's type notation in a way that catches all edge cases.
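For concreteness, the Protocol-based attempt (the post's "Step 4", as described downthread) looks roughly like this; a sketch, and it still misses edge cases such as classes that only implement `__radd__`, which is the post's point:

```python
from typing import Protocol, TypeVar

class SupportsAdd(Protocol):
    def __add__(self, other): ...

T = TypeVar("T", bound=SupportsAdd)

def slow_add(a: T, b: T) -> T:
    # duck-typed addition; the bound only checks that __add__ exists,
    # not that `a + b` is actually valid for this particular pair
    return a + b
```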
> If you want to allow users to pass in any objects, try to add and fail at runtime
Which is not what the post author wants to do. They want to find a way to use Python's type notation to catch those errors with the type checker, so they don't happen at runtime.
> the entire post is built upon the premise that accepting all types is good API design
It is based on no such thing. I don't know where you're getting that from.
> The desired type is known; it's "Addable" (i.e., "doesn't throw an exception when the built-in add operator is used").
The mistake both you and the reddit post's author make is treating the `+` operator the same as you would an interface method. Despite Python having __add__/__radd__ methods, this isn't true, nor is it true in many other programming languages. For example, Go doesn't have a way to express "can use the + operator" at all, and "can use comparison operators" is defined as an explicit union of built-in types.[0] In C# you could only do this as of .NET 7, which was released in Nov 2022[1] -- was the C# type system unusable for the 17 years prior, when it didn't support this scenario?
If this were any operation on `a` and `b` other than a built-in operator, such as `a.foo(b)`, it would be trivial to define a Protocol (which the author does in Step 4) and have everything work as expected. It's only because of misunderstanding of basic Python that the author continues to struggle for another 1000 words before concluding that type checking is bad. It's an extremely cherry-picked and unrealistic scenario either from someone who is clueless, or knows what they're doing and is intentionally being malicious in order to engagement bait.[2]
This isn't to say Python (or Go, or C#) has the best type system, and it certainly lacks compared to Rust which is a very valid complaint, but "I can't express 'type which supports the '+' operator'" is an insanely esoteric and unusual case, unsupported in many languages, that it's disingenuous to use it as an excuse for why people shouldn't bother with type hinting at all.
[0] https://pkg.go.dev/cmp#Ordered
[1] https://learn.microsoft.com/en-us/dotnet/standard/generics/m...
[2] actually reading through the reddit comments, the author specifically says they were engagement baiting so... I guess they had enough Python knowledge to trick people into thinking type hinting was bad, fair enough!
> treating the `+` operator the same as you would an interface method
In other words, you agree that the Python type hint system does not give you a good, built-in way to express the "Addable" type.
Which means you are contradicting your claims that the type the article wants to express is "unknown" and that the article is advocating using "Any" for this case. The type is not unknown--it's exactly what I said: "doesn't throw an exception when using the + operator". That type is just not expressible in Python's type hint system in the way that would be needed. And "Any" doesn't address this problem, because the article is not saying that every pair of objects should be addable.
> "I can't express 'type which supports the '+' operator'" is an insanely esoteric and unusual case
I don't see why. Addition is a very commonly used operation, and being able to have a type system that can express "this function takes two arguments that can be added using the addition operator" seems like something any type system that delivers the goods it claims to deliver ought to have.
> unsupported in many languages
Yes, which means many languages have type systems that claim to deliver things they can't actually deliver. They can mostly deliver them, but "mostly" isn't what advocates of using type systems in all programs claim. So I think the article is making a useful point about the limitations of type systems.
> it's disingenuous to use it as an excuse for why people shouldn't bother with type hinting at all.
The article never says that either. You are attacking straw men.
> I don't see why. Addition is a very commonly used operation, and being able to have a type system that can express "this function takes two arguments that can be added using the addition operator" seems like something any type system that delivers the goods it claims to deliver ought to have.
If your comparison is Rust, sure, but you can't even express this in Java. No, Java's type system is not great, but it's a type system that's been used for approximately 500 trillion lines of production code powering critical systems and nobody has ever said "Java sucks because I can't express 'supports the + operator' as a generic type". (It sucks for many other reasons.)
Again, it is factually and objectively an esoteric and unusual case. Nobody in the real world is writing generics like this, only academics or people writing programming blogs about esoterica.
If your argument is that all type systems are bad or deficient, fine, but calling out Python for this when it has the exact same deficiency as basically every other mainstream language is asinine.
> The article never says that either. You are attacking straw men.
The article says "Turning even the simplest function that relied on Duck Typing into a Type Hinted function that is useful can be painfully difficult." The subterfuge is that this is not even remotely close to a simple function because the type being expressed, "supports the + operator", is not even remotely close to a simple type.
> it is factually and objectively an esoteric and unusual case.
Sorry, but your unsupported opinion is not "factual and objective".
> If your argument is that all type systems are bad or deficient
I said no such thing, any more than the article did. Again you are attacking a straw man. (If you had said "limited in what they can express", I might buy that. But you didn't.)
I think I've said all I have to say in this subthread.
It's factual and objective that billions, if not trillions of lines of Java and Go have been deployed and the language still cannot express "supports the + operator" as a type constraint. In production, non-academic settings, people don't generally write code like that.
Again, this is an esoteric limitation from the perspective of writing code that runs working software, not a programming language theory perspective.
How many of those lines of code would have benefited from being able to express that type constraint, if the language made it possible?
You have no idea, and nor does anyone else. But that's what you would need "factual and objective" evidence about to support the claim you made.
By your argument, anything that programming languages don't currently support, must be an "esoteric limitation" because billions if not trillions of lines of code have been written without it. Which would mean programming languages would never add new features at all. But it's certainly "factual and objective" that programming languages add new features all the time. Maybe this is another feature that at some point a language will add, and programmers will find it useful. You don't even seem to be considering such a possibility.
> But the entire post is built upon the premise that accepting all types is good API design. Which it isn't, at all.
Was Tim Peters also wrong way back in the day when he counseled Guido van Rossum to allow floats to be added to integers without a cast, like other popular languages?
How is `float | int` anywhere close to equivalent to `Any`?
How is "responds to the `__add__` method" anywhere close to equivalent to `Any`?
If your implication is that "implementing __add__ means you can use the + operator", you are incorrect. This is a common Python beginner mistake, but it isn't really a Python type checking issue, this is complexity with Python built-ins and how they interact with magic methods.
My suggestion -- don't rely on magic methods.
This is a strange and aggressive bit of pedantry. Yes, you'd also need `__radd__` for classes that participate in heterogeneous-type addition, but it's clear what was meant in context. The fundamentals are not all "beginner" level, and beginners wouldn't be implementing operator overloads in the first place (most educators hold off on classes entirely for quite a while; they're pure syntactic sugar, after all, and the use case is often hard to explain to beginners).
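A minimal illustration of the `__add__`/`__radd__` point (the class name is invented): implementing only `__add__` covers `m + 1.5` but not `1.5 + m`, which falls back to `__radd__`:

```python
class Meters:
    def __init__(self, value: float):
        self.value = value

    def __add__(self, other):
        # handles Meters + number
        return Meters(self.value + float(other))

m = Meters(2.0)
print((m + 1.5).value)       # works: 3.5

try:
    1.5 + m                  # float.__add__ returns NotImplemented,
except TypeError:            # and Meters has no __radd__ to fall back on
    print("needs __radd__")

# The fix for heterogeneous-type addition:
Meters.__radd__ = Meters.__add__
print((1.5 + m).value)       # now works: 3.5
```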
Regardless, none of that bears on the original `slow_add` example from the Reddit page. The entire point is that we have an intuition about what can be "added", but can't express it in the type system in any meaningful way. Because the rule is something like "anything that says it can be added according to the protocol — which in practical terms is probably any two roughly-numeric types except for the exceptions, and also most container types but only with other instances of the same type, and also some third-party things that represent more advanced mathematical constructs where it makes sense".
And saying "don't rely on magic methods" does precisely nothing about the fact that people want the + symbol in their code to work this way. It does suggest that `slow_add` is a bad thing to have in an API (although that was already fairly obvious). But in general you do get these issues cropping up.
Dynamic typing has its place, and many people really like it, myself included. Type inference (as in the Haskell family) solves the noise problem (for those who consider it a problem rather than something useful) and is elegant in itself, but just not the strictly superior thing that its advocates make it out to be. People still use Lisp family languages, and for good reason.
But maybe Steve Yegge would make the point better.
> This is a strange and aggressive bit of pedantry.
There's nothing pedantic about it. That's how Python works, and getting into the nuts and bolts of how Python works is precisely why the linked article makes type hinting appear so difficult.
> The entire point is that we have an intuition about what can be "added", but can't express it in the type system in any meaningful way.
As the post explores, your intuition is also incorrect. For example, as the author discovers in the process, addition via __add__/__radd__ is not addition in the algebraic field sense. There is no guarantee that adding types T + T will yield a T. Or that both operands are of the same type at all, as would be the case with "adding" a string and int. Or that A + B == B + A. We can't rely on intuition for type systems.
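Standard-library examples already break each of those algebraic assumptions:

```python
from datetime import date, timedelta

# T + T need not yield T: bool + bool is an int
assert type(True + True) is int

# Operands need not share a type at all: date + timedelta -> date
assert date(2024, 1, 1) + timedelta(days=1) == date(2024, 1, 2)

# A + B == B + A fails for sequence concatenation
assert "ab" + "cd" != "cd" + "ab"
```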
> your intuition is also incorrect.
No, it definitionally isn't. The entire point is that `+` is being used to represent operations where `+` makes intuitive sense. When language designers are revisiting the decision to use the `+` symbol to represent string concatenation, how many of them are thinking about algebraic fields, seriously?
And all of this is exactly why you can't just say that it's universally bad API design to "accept all types". Because the alternative may entail rejecting types for no good reason. Again, dynamically typed languages exist for a reason and have persisted for a reason (and Python in particular has claimed the market share it has for a reason) and are not just some strictly inferior thing.
> you can't just say that it's universally bad API design to "accept all types"
Note, though, that that's not really the API design choice that's at stake here. Python will still throw an exception at runtime if you use the + operator between objects that don't support being added together. So the API design choice is between that error showing up as a runtime exception, vs. showing up as flagged by the type checker prior to runtime.
Or, to put it another way, the API design choice is whether or not to insist that your language provide explicit type definitions (or at least a way to express them) for every single interface it supports, even implicit ones like the + operator, and even given that user code can redefine such interfaces using magic methods. Python's API design choice is to not care, even with its type hinting system--i.e., to accept that there will be interface definitions that simply can't be captured using the type hinting system. I personally am fine with that choice, but it is a design choice that language users should be aware of.
> No, it definitionally isn't. The entire point is that `+` is being used to represent operations where `+` makes intuitive sense.
Huh? There's no restriction in Python's type system that says `+` has to "make sense".
> banana and mango smoothie

> <Response [200]>
So we have Fruit + Fruit = Smoothie. Overly cute, but sensible from a CS101 OOP definition and potentially code someone might encounter in the real world, and demonstrates how not all T + T -> T. And we have Fruit + number = requests.Response. Complete nonsense, but totally valid in Python. If you're writing a generic method `slow_add` that needs to support `a + b` for any two types -- yes, you have to support this nonsense.
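The snippet behind that quoted output was lost in quoting; a hypothetical reconstruction (with a local stub standing in for `requests.Response`) might look like:

```python
class Response:
    """Stub standing in for requests.Response."""
    def __repr__(self):
        return "<Response [200]>"

class Smoothie:
    def __init__(self, names):
        self.names = names
    def __str__(self):
        return " and ".join(self.names) + " smoothie"

class Fruit:
    def __init__(self, name):
        self.name = name
    def __add__(self, other):
        if isinstance(other, Fruit):
            # Fruit + Fruit -> Smoothie: cute, but not T + T -> T
            return Smoothie([self.name, other.name])
        # Fruit + number -> Response: nonsense, but valid Python
        return Response()

print(Fruit("banana") + Fruit("mango"))   # banana and mango smoothie
print(repr(Fruit("banana") + 3))          # <Response [200]>
```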
I guess that's the difference between the Python and the TypeScript approach here. In general, if something is possible, valid, and idiomatic in JavaScript, then TypeScript attempts to model it in the type system. That's how you get things like conditional types and mapped types that allow the type system to validate quite complex patterns. That makes the type system more complex, but it means that it's possible to use existing JavaScript patterns and code. TypeScript is quite deliberately not a new language, but a way of describing the implicit types used in JavaScript. Tools like `any` are therefore an absolute last resort, and you want to avoid it wherever possible.
When I've used Python's type checkers, I have more the feeling that the goal is to create a new, typed subset of the language, that is less capable but also easier to apply types to. Then anything that falls outside that subset gets `Any` applied to it and that's good enough. The problem I find with that is that `Any` is incredibly infective - as soon as it shows up somewhere in a program, it's very difficult to prevent it from leaking all over the place, meaning you're often back in the same place you were before you added types, but now with the added nuisance of a bunch of types as documentation that you can't trust.
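A small sketch of how a single `Any` silences checking downstream (function names invented):

```python
from typing import Any

def load_config() -> Any:         # Any enters the program here
    return {"retries": "3"}       # oops: a string, not an int

cfg = load_config()
retries = cfg["retries"]          # inferred as Any, so no complaint
total = retries * 2               # still Any; the checker stays silent
print(total)                      # "33" at runtime, not 6
```

Every value derived from an `Any` is itself `Any`, which is exactly the "leaking all over the place" effect described above.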
> My suggestion -- don't rely on magic methods.
So no e.g. numpy or torch then?
> The thing that finally got me on board with optional type hints in Python was realizing that they're mainly valuable as documentation.
> But it's really valuable documentation! Knowing what types are expected and returned just by looking at a function signature is super useful.
So ... you didn't have this realisation prior to using Python type hints? Not from any other language you used prior to Python?
I didn't. I've been mainly a Python, PHP and JavaScript programmer for ~25 years and my experience with typed languages was mostly pre-type-inference Java which felt wildly less productive than my main languages.
> I didn't. I've been mainly a Python, PHP and JavaScript programmer for ~25 years
Maybe its time you expanded your horizons, then. Try a few statically typed languages.
Even plain C gives you a level of confidence in deployed code that you will not get in Python, PHP or Javascript.
Maybe if your C has aggressive test coverage and you’re using Valgrind religiously and always checking errno when you’re supposed to and you’re checking the return value of everything. Otherwise lol. C as it’s written by middling teams is a soup of macros, three-star variables, and questionable data structure implementations, where everybody fiddles with everybody else’s data. I’ll take good C over bad Python, but good C is rare.
> C as it’s written by middling teams is a soup of macros, three-star variables, and questionable data structure implementations, where everybody fiddles with everybody else’s data. I’ll take good C over bad Python, but good C is rare.
Ironically, the worst production C written in 2025 is almost guaranteed to be better than the average production Python, Javascript, etc.
The only people really choosing C in 2025 are those with a ton of experience under their belt, who are comfortable with the language and its footguns due to decades of experience.
IOW, those people with little experience are not choosing C, and those that do choose it have already, over decades, internalised patterns to mitigate many of the problems.
At the end of the day, in 2025, I'd still rather maintain a system written in a statically typed language than a system written in a dynamically typed language.
> The only people really choosing C in 2025 are those with a ton of experience under their belt, who are comfortable with the language and its footguns due to decades of experience.
Experienced users of C can't be the only people who use it if the language is going to thrive. It's very bad for a language when the only ones who speak it are those who speak it well. The only way you get good C programmers is by cultivating bad C programmers, you can't have one without the other. If you cut off the bad programmers (by shunning or just not appealing to them, or loading your language with too many beginner footguns), there's no pipeline to creating experts, and the language dies when the experts do.
The people who come along to work on their legacy systems are better described as archaeologists than programmers. COBOL of course is the typical example, there's no real COBOL programming community to speak of, just COBOL archeologists who maintain those systems until they too shall die and it becomes someone else's problem, like the old Knight at the end of Indiana Jones.
> Experienced users of C can't be the only people who use it if the language is going to thrive.
I don't think it's going to thrive. It's going to die. Slowly, via attrition, but there you go.
I find automated tests give me plenty of confidence in the Python code I deploy. I'd rather deploy a codebase with comprehensive tests and no types over one with types and no tests.
I've been dabbling with Go for a few projects and found the type system for that to be pleasant and non-frustrating.
I feel like Go is a very natural step from Python because it's still pretty easy and fast to start with.
(and if you want to embrace static types, the language starting with them might get advantages over an optional backwards compatible type system)
You may have read this already, but the biggest surprise for one of the Go creators: Go was motivated by unhappiness with C++, and they expected to attract C++ users, but instead people came from Python and Ruby: https://commandcenter.blogspot.com/2012/06/less-is-exponenti...
> I'd rather deploy a codebase with comprehensive tests and no types over one with types and no tests.
With Python, PHP and Javascript, your only option is "comprehensive tests and no types".
With statically typed languages, you have options other than "types with no tests". For example, static typing with tests.
Don't get me wrong; I like dynamically typed languages. I like Lisp in particular. But, TBH, in statically typed languages I find myself producing tests that test the business logic, while in Python I find myself producing tests that ensure all callers in a runtime call-chain have the correct type.
BTW: You did well to choose Go for dipping your toes into statically typed languages - the testing comes builtin with the tooling.
That was more because of Java than static typing.
Type hints as documentation are a gateway drug to type hints for bug finding. Keep at it :)
This is a naive realization. When type checking is used to the maximum extent, it becomes just as important as unit testing. It is an actual safety contribution to the code.
Many old school python developers don't realize how important typing actually is. It's not just documentation. It can actually roughly reduce dev time by 50% and increase safety by roughly 2x.
It's claims like that which used to put me off embracing type hints!
I'd been programming for 20+ years and I genuinely couldn't think of any situations where I'd had a non-trivial bug that I could have avoided if I'd had a type checker - claims like "reduce dev time by 50%" didn't feel credible to me, so I stuck with my previous development habits.
Those habits involved a lot of work performed interactively first - using the Python terminal, Jupyter notebooks, the Firefox/Chrome developer tools console. Maybe that's why I never felt like types were saving me any time (and in fact were slowing me down).
Then I had my "they're just interactive documentation" realization and finally they started to click for me.
It depends on the project. If you're working always on one project and you have all the time in the world to learn it (or maybe you wrote it), then you can get away with dynamic types. It's still worse but possible.
But if you aren't familiar with a project then dynamic typing makes it an order of magnitude harder to navigate and understand.
I tried to contribute some features to a couple of big projects - VSCode and Gitlab. VSCode, very easy. I could follow the flow trivially, just click stuff to go to it etc. Where abstract interfaces are used it's a little more annoying but overall wasn't hard and I have contributed a few features & fixes.
Gitlab, absolutely no chance. It's full of magically generated identifiers so even grepping doesn't work. If you find a method like `foo_bar` it's literally impossible to find where it is called without being familiar with the entire codebase (or asking someone who is) and therefore knowing that there's a text file somewhere called `foo.csv` that lists `bar` and the method name is generated from that (or whatever).
In VSCode it was literally right-click->find all references.
I have yet to succeed in modifying Gitlab at all.
I did contribute some features to gitlab-runner, but again that is written in Go so it is possible.
So in some cases those claims are not an exaggeration - static types take you from "I give up" to "not too hard".
> In VSCode it was literally right-click->find all references.
Flip side of this is that I hate trying to read code written by teams relying heavily on such features, since typically zero time was spent on neatly organizing the code and naming things to make it actually readable (from top to bottom) or grep-able. Things are randomly spread out in tiny files over countless directories and it's a maze you stumble around just clicking identifiers to jump somewhere. Where something is rarely matter as the IDE will find it. I never develop any kind of mental image of that style of code and it completely rules out casually browsing the code using simpler tools.
That hasn't been my experience at all. I think maybe it feels more like a maze because when you go-to-definition you often don't actually check where you are in the filesystem, so you don't build a mental map of the repo as quickly as you do when you are forced to manually search through all the files. But I wouldn't say that is better.
Kind of like how you don't learn an area when you always use satnav as quickly as you do when you manually navigate with paper maps. But do you want to go back to paper maps? I don't.
Static type checking (which is what I assume you mean by "typing") can also be a massive pain in the ass that stands in the way of incremental development, even if the end-goal is to ship an api with clear type signatures.
There are developers who design apis by trying to figure out readable invocations. These developers discover, rather than design, type hierarchies and library interfaces.
> Many old school python developers don't realize how important typing actually is.
I don't think this is true. There's simply a communication breakdown where type-first developers don't see the benefits of disabling static checking to design interfaces, and interface-first developers don't see why they should put static checking ahead of interface iteration speed.
> Static type checking (which is what I assume you mean by "typing") can also be a massive pain in the ass that stands in the way of incremental development,
No, they don't. There is nothing about types that makes incremental development harder. They keep providing the same benefits when you work incrementally.
> There is nothing about types that would make incremental develpment harder.
Oh, please, this is either lack of imagination or lack of effort to think. You've never wanted to test a subset of a library halfway through a refactor?
Yes, type checkers are very good at tracking refactoring progress. If it turns out that you can proceed to test some subset, then congratulations, you found a new submodule.
I am continually astounded by the stubborn incuriosity of humans with a bone to pick.
What in the world are you talking about. Please specify how lack of types helped you in your aforementioned scenario.
I don't think it's a lack of curiosity from others. It's more like a fundamental lack of knowledge from you. Let's hear it. What are you actually talking about? Testing a subset of a library halfway through a refactor? How does a lack of types help with that?
> There are developers who design apis by trying to figure out readable invocations. These developers discover, rather than design, type hierarchies and library interfaces.
My hunch is that the people who see no downsides whatsoever in static typing are those who mostly just consume APIs.
There are downsides. But the upsides outweigh the downsides.
I'm not a consumer of APIs. I've done game programming, robotics, embedded systems development (C++ and Rust), web frontend development (with and without React, with jQuery, Angular, TypeScript, plain JS, zod), and web backend development (Go, Haskell, Node.js with TypeScript, and lots and lots of Python across the most popular frameworks: Flask + SQLAlchemy, Django, FastAPI + Pydantic).
I've done a lot. I can tell you: if you don't see how types outweigh their absence, you're a programmer whose experience is heavily weighted toward untyped programming. You don't have balanced experience to make a good judgement. Usually these people have a "data scientist" background: data analysts, data scientists, machine learning engineers, etc. These people start programming heavily in the Python world WITHOUT types and develop unbalanced opinions shaped by their initial style of programming. If this describes you, then stop and think... I'm probably right.
You are wrong. I learned programming mostly in C++ in the late 90's, and programmed in C, C++ and Java in professional settings for a decade or so, and still do from time to time.
Hm if you want or have time, can you give me a specific example of where no types are clearly superior to types? Maybe you can convince me but I still think your opinion is wrong despite your relevant experience.
>There are developers who design apis by trying to figure out readable invocations. These developers discover, rather than design, type hierarchies and library interfaces.
No, you're one of the old school python developers. Types don't hinder creativity, they augment it. The downside is the slight annoyance of updating a type definition and the run time definition vs. just updating the runtime definition.
Let me give you an example of how it hinders creativity.
Let's say you have an interface that is immensely complex: many nested structures, thousands of keys. And let's say you want to change the design by shifting 3 or 4 things around. Let's also say this interface is used by hundreds of other methods and functions.
When you move 3 or 4 things around in a complex interface you're going to break a subset of those hundreds of other methods or functions. You're not going to know where they break if you don't have type checking enabled. You're only going to know if you tediously check every single method/function OR if it crashes during runtime.
With a statically typed definition you can do that change and the type checker will identify EVERY single place where an additional change to the methods that use that type needs to be changed as well. This allows you to be creative and make any willy nilly changes you want because you are confident that ANY change will be caught by the type checker. This Speeds up creativity, while without it, you will be slowed down, and even afraid to make the breaking change.
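A concrete version of that workflow (names invented): rename a field on a dataclass, and every stale access becomes a checker error instead of a runtime crash:

```python
from dataclasses import dataclass

@dataclass
class Order:
    user_id: int      # rename this to `customer_id` and...
    total: float

def invoice(order: Order) -> str:
    # ...mypy/pyright immediately flags this line:
    #   error: "Order" has no attribute "user_id"
    return f"Invoice for user {order.user_id}: {order.total}"

print(invoice(Order(user_id=1, total=9.5)))
```

The checker enumerates every call site that needs updating, which is what makes the "willy nilly" refactor safe.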
You are basically the stereotype I described. An old school python developer. Likely one who got used to programming without types and now hasn't utilized types extensively enough to see the benefit.
>I don't think this is true. There's simply a communication breakdown where type-first developers don't see the benefits of disabling static checking to design interfaces, and interface-first developers don't see why they should put static checking ahead of interface iteration speed.
This is true. You're it. You just don't know it. When I say these developers don't know I'm literally saying they think like you and believe the same things you believe BECAUSE they lack knowledge and have bad habits.
The habit thing is what causes the warped knowledge. You're actually slowed down by types because you're not used to them: you spent years coding in Python without types, so it's ingrained for you to test and think without them. Adding types becomes an initial overhead for these people because their programming style is so entrenched.
Once you get used to it, and once you see that it's really just a slight additional effort, then you will get it. But it takes a bit of discipline and practice to get there.
> It can actually roughly reduce dev time by 50% and increase safety by roughly 2x.
Type annotations don’t double productivity. What does “increase safety by 2×” even mean? What metric are you tracking there?
In my experience, the main non-documentation benefit of type annotations is warning where the code is assuming a value where None might be present. Mixing up any other kind of types is an extremely rare scenario, but NoneType gets everywhere if you let it.
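That leaking-`None` case is exactly what `Optional` annotations surface; a minimal sketch:

```python
from typing import Optional

def find_user(uid: int) -> Optional[str]:
    return {1: "alice"}.get(uid)     # dict.get returns None on a miss

name = find_user(2)
# name.upper()        # checker error: "upper" is not defined on None
if name is not None:  # narrowing the type satisfies the checker
    print(name.upper())
else:
    print("no such user")
```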
> Type annotations don’t double productivity.
Obviously this post is still firmly in made up statistics land, but i agree with OP, in some cases they absolutely do.
New code written by yourself? No, probably not. But refactoring a hairy old enterprise codebase? Absolutely a 2×, 3× multiplier to productivity / time-to-correctness there.
>Type annotations don’t double productivity. What does “increase safety by 2×” even mean? What metric are you tracking there?
My own anecdotal metric. Isn't that obvious? The initial post was an anecdotal opinion as well. I don't see a problem here.
>In my experience, the main non-documentation benefit of type annotations is warning where the code is assuming a value where None might be present. Mixing up any other kind of types is an extremely rare scenario, but NoneType gets everywhere if you let it.
It's not just None. Imagine some highly complex object with nested values and a function like `modify_direction(direction)`.
wtf is the direction object? Is it in Cartesian or is it in polar? Is it in 2D or 3D? Most old school Python devs literally have to find where `modify_direction` is called, and that leads them to `modify_data`. Then you have to find where `modify_data` is called, and so on and so forth, until you finally reach `create_quat`. And then, boom, you figure out what it does by actually reading all the complex quaternion math `create_quat` does. Absolutely insane. If I have a type, I can just look at the type to figure everything out... you can see how much faster it is.
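The code snippets from the original comment were lost, but the call chain it describes might look roughly like this (the bodies are invented; only the function names come from the comment):

```python
def create_quat(x, y, z, w):
    # the only place where the shape of a "direction" is actually visible
    return {"x": x, "y": y, "z": z, "w": w}

def modify_data(data):
    data["direction"] = create_quat(0.0, 0.0, 0.0, 1.0)
    return modify_direction(data["direction"])

def modify_direction(direction):
    # wtf is `direction`? Cartesian? Polar? 2D? 3D? Without a type,
    # you only find out by walking the call chain back to create_quat.
    direction["w"] = -direction["w"]
    return direction
```

With an annotation like `def modify_direction(direction: Quaternion) -> Quaternion: ...`, the signature alone answers the question.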
Oh and get this. Let's say there's someone who feels euler angles are better. So he changes create_quat to create_euler. He modifies all the places create_quat is used (which is about 40 places) and he misses 3 or 4 places where it's called.
He then ships it to production. Boom. The extra time debugging production when it crashes, and also the extra time tediously finding where create_quat was used. All of that could have been saved by a type checker.
I'm a big python guy. But I'm also big into haskell. So I know both the typing worlds and the untyped worlds really well. Most people who complain like you literally have mostly come from a python background where typing isn't used much. Maybe you used types occasionally but not in a big way.
If you used both untyped languages and typed languages extensively you will know that types are intrinsically better. It's not even a contest. Anyone who still debates this stuff just lacks experience.
> If you used both untyped languages and typed languages extensively you will know that types are intrinsically better. It's not even a contest. Anyone who still debates this stuff just lacks experience.
Or have enough experience to have lived e.g. the J2EE and C++ template hells and see where this is going.
Typing can get extreme to the point where it becomes proof-based typing, so I know what you mean here. I've lived through it and done it.
In general types outweigh no types EVEN with the above.
> My own anecdotal metric. Isn't that obvious? The initial post was an anecdotal opinion as well. I don't see a problem here.
WTF is “an anecdotal metric”‽ That just sounds like an evasive way to say “I want to make up numbers I can’t justify”.
> wtf is direction object? Is it in Cartesian or is it in polar? Is in 2D or 3D?
This seems very domain-specific.
> Most people who complain like you literally have mostly come from a python background where typing isn't used much. Maybe you used types occasionally but not in a big way.
> If you used both untyped languages and typed languages extensively you will know that types are intrinsically better. It's not even a contest. Anyone who still debates this stuff just lacks experience.
I’ve got many years of experience with static typed languages over a 25 year career. Just because somebody disagrees with you, it doesn’t mean they are a clueless junior.
> WTF is “an anecdotal metric”
It's a metric (how much more productive he is), and anecdotal (base only on his experience). Pretty obvious I would have thought.
> This seems very domain-specific.
It was an example from one domain but all domains have types of things. Are you really trying to say that only 3D games specifically would benefit from static types?
> Just because somebody disagrees with you, it doesn’t mean they are a clueless junior.
Clueless senior then I guess? Honestly I don't know how you can have this much experience and still not come to the obvious conclusion. Perhaps you only write small scripts or solo projects where it's more feasible to get away without static types?
What would you say to someone who said "I have 25 years of experience reading books with punctuation and I think that punctuation is a waste of time. Just because you disagree with me doesn't mean I'm clueless."?
>WTF is “an anecdotal metric”‽ That just sounds like an evasive way to say “I want to make up numbers I can’t justify”.
What, I have to have scientific papers for every fucking opinion I have? The initial parent post was an anecdotal opinion. Your post is an opinion. I can't have opinions here without citing a scientific paper that's 20 pages long, that no one is going to read but will just blindly trust because it's "science"? Come on. What I'm saying is self-evident to people who know. There are thousands of things like this in the world where people just know, even though statistical proof hasn't been measured or established. For example, eating horse shit every day probably isn't healthy, even though there isn't SCIENCE that directly proves it unhealthy. Type checking is just one of those things.
OBVIOUSLY I think development is overall much better, much faster and much safer with types. I can't prove it with metrics, but I'm confident my "anecdotal" metrics, which I prefaced with "roughly", are "roughly" ballpark trueish.
>This seems very domain-specific.
Domain specific? Basic orientation with quaternions and euler angles is specific to reality. Orientation and rotations exist in reality and there are thousands and thousands of domains that use it.
Also the example itself is generic. Replace euler angles and quats with vectors and polar coordinates. Or cats and dogs. Same shit.
>I’ve got many years of experience with static typed languages over a 25 year career. Just because somebody disagrees with you, it doesn’t mean they are a clueless junior.
The amount of years of experience is irrelevant. I know tons of developers with only 5 years of experience who are better than me and tons of developers with 25+ who are horrible.
I got 25 years as well. If someone disagrees with me (on this specific topic), it absolutely doesn't mean they are a junior. It means they lack knowledge and experience. This is a fact. It's not an insult. It just means for a specific thing they don't have experience or knowledge which is typical. I'm sure there's tons of things where you could have more experience. Just not this topic.
If you have experience with static languages, it likely isn't that extensive. You're likely more of an old school Python guy who spent a ton of time programming without types.
> What I have to have scientific papers for every fucking opinion I have?
No, but if you’re going to say things like “increase safety by roughly 2x” then if you can’t even identify the unit then you are misleading people.
It’s absolutely fine to have an opinion. It’s not fine to make numbers up.
> I'm confident my "anecdotal" metrics, which I prefaced with "roughly", are "roughly" ballpark trueish.
Okay, so if it’s 1.5×, 2.0×, or 2.5×… again, what metric? What unit are we dealing with?
You’re claiming that it’s “in the ballpark”, but what is “in the ballpark”? The problem is not one of accuracy, the problem is that it’s made up.
> If someone disagrees with me (on this specific topic), it absolutely doesn't mean they are a junior. It means they lack knowledge and experience. This is a fact.
It’s not a fact, it’s ridiculous. You genuinely believe that if somebody disagrees with you, it’s a fact that they lack knowledge and experience? It’s not even remotely possible for somebody to have an informed difference of opinion with you?
>No, but if you’re going to say things like “increase safety by roughly 2x” then if you can’t even identify the unit then you are misleading people.
So when I talk about multipliers I have to have a unit? What is the unit of safety? I can't say something like 2x more safe? I just have to say more safe? What if I want to emphasize that it can DOUBLE safety?
Basically with your insane logic people can't talk about productivity or safety or multipliers at the same time because none of these concepts have units.
Look I told YOU it's anecdotal, EVERYONE can read it. You're no longer "deceived" and no one else is.
>Okay, so if it’s 1.5×, 2.0×, or 2.5×… again, what metric? What unit are we dealing with?
If you don't have the capacity to understand what I'm talking about without me specifying a unit, then I'll make one up:
I call it safety units: the number of errors you catch in production. That's my unit: one caught error in prod per year. For untyped languages, let's say you catch about 20 errors a year. With types, that goes down to 10.
>It’s not a fact, it’s ridiculous. You genuinely believe that if somebody disagrees with you, it’s a fact that they lack knowledge and experience? It’s not even remotely possible for somebody to have an informed difference of opinion with you?
What? And you think all opinions are equal, and everyone has the freedom to have any opinion they want, and no one can be right or wrong because everything is just an opinion? Do all opinions need to be fully respected even when they're insane?
Like my example: if you have the opinion that eating horse shit is healthy, I'm going to make a judgement call that your opinion is WRONG. Lack of typing is one of these "opinions".
Take a step back and look at what you are saying:
> If someone disagrees with me (on this specific topic), it absolutely doesn't mean they are a junior. It means they lack knowledge and experience. This is a fact.
You think it’s impossible for anybody to have an informed opinion that disagrees with yours. You literally think yours is the only possible valid opinion. If that doesn’t set off big warning bells in your head, you are in dire need of a change in attitude.
This conversation is not productive, let’s end it.
>You think it’s impossible for anybody to have an informed opinion that disagrees with yours. You literally think yours is the only possible valid opinion. If that doesn’t set off big warning bells in your head, you are in dire need of a change in attitude.
I mean, do you think we should have a fair and balanced discussion about the merits of child molestation and rape? We should respect other people's opinions and not tell them they are wrong if their opinion differs? That's what I think of your opinion. I think your opinion is utterly wrong, and I do think my opinion is the valid opinion.
Now that doesn't mean I disrespect your opinion. That doesn't mean you're not allowed to have a different opinion. It just means I tell you straight up: you're wrong and you lack experience. You're free to disagree with that and tell me the exact same thing. I'm just blunt, and I welcome you to be just as blunt to me. Which you have.
The thing I don't like about you is that you turned it into a discussion about opinions and the nature of holding opinions. Dude. Just talk about the topic. If you think I'm wrong, tell me straight up. Talk about why I'm wrong. Don't talk about my character, the manner in which I should formulate opinions, and what I think are facts.
>This conversation is not productive, let’s end it.
I agree let's end it. But let's be utterly clear. YOU chose to end it with your actions by shifting the conversation into saying stuff like "you literally think yours is the only possible opinion." Bro. All you need to do is state why you think my opinion is garbage and prove it wrong. That's the direction of the conversation, you ended it by shifting it to a debate on my character.
I really love Python for its expedience, but type hints still feel like they don't belong in the language. They don't seem to come with the optimisation benefits you get with statically typed languages. As someone who uses C and Julia (and wishes they had time for Rust), introducing solid typing yields better end results at a minimum, or is a requirement at the other end of the scale.
The extra typing clarification in python makes the code harder to read. I liked python because it was easy to do something quickly and without that cognitive overhead. Type hints, and they feel like they're just hints, don't yield enough of a benefit for me to really embrace them yet.
Perhaps that's just because I don't use advanced features of IDEs. But then I am getting old :P
EDIT: also, this massively depends on what you're doing with the language! I don't have huge customer workloads to consider any longer..!
> The extra typing clarification in python makes the code harder to read
It’s funny, because for me it’s quite the opposite: I find myself reading Python more easily when there are type annotations.
One caveat: for that to happen, I need to know that type checking is also in place, or else my brain dismisses the annotations, since they could just be noise.
I guess this is why in Julia or Rust or C you have this stronger feeling that types are looking after you.
I think the fact that they fundamentally don't look after you is where my resistance comes from. Will try and evaluate some newer code that uses them and see how I get on a bit more :)
> They don't seem to come with the benefits of optimisation that you get with static typed languages
They don't. And cannot, for compatibility reasons. Aside from setting some dunders on certain objects (which are entirely irrelevant unless you're doing some crazy metaprogramming thing), type annotations have no effect on the code at runtime. The Python runtime will happily bytecode-compile and execute code with incorrect type annotations, and a type-checking tool really can't do anything to prevent that.
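A tiny illustration of that point (the function is invented for demonstration): the runtime happily executes code that violates its own annotations.

```python
def double(x: int) -> int:
    # The annotation claims int, but CPython never checks it at runtime.
    return x * 2

# A type checker flags this call; the interpreter runs it anyway,
# because str * int is string repetition.
result = double("ab")  # "abab"
```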
Now that python has a jit it could use them (not saying it should) for speculative compilation
My understanding is that currently python can collect type data in test runs and use it to inform the jit during following executions
> Now that python has a jit it could use them (not saying it should) for speculative compilation
I'd forgotten about that. Now that you mention it, my understanding is that this is actually the plan.
> I don't use advanced features of IDEs
I use vanilla vim (no plugins) for my editor, and still consider type hints essential.
Interesting that for you typing makes the code harder to read. What context do you use Python for? And who is writing it?
In my experience I have seen far too much Python code like
`def func(data, *args, **kwargs)`
with no documentation and I have no clue wtf it's doing. Now I am basically all in on type hints (except cases where it's impossible like pandas).
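For contrast, a hedged sketch of what a typed version of such a signature could look like (`summarize_typed` and its parameters are invented for illustration):

```python
from collections.abc import Iterable

# The opaque original: nothing tells the reader what it accepts.
def func(data, *args, **kwargs):
    ...

# A typed equivalent makes the contract visible to readers, IDEs,
# and type checkers alike.
def summarize_typed(data: Iterable[float], *, precision: int = 2) -> str:
    return f"{sum(data):.{precision}f}"
```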
They catch bugs. And you don't have to use them; even if they're only provided by libraries, there is a benefit to users.
Remembering projects where type hints would have been helpful to grok the code, I now mostly like them. They are useful when you come back after days or weeks and try to remember what this function produces and what that one actually takes in.
And Python always was rather strongly typed, so you had to consider the types anyway. Now you get notes, which often do help.
> The extra typing clarification in python makes the code harder to read.
It depends what you mean by "read". If you literally mean you're doing a weird Python poetry night then sure they're sort of "extra stuff" that gets in the way of your reading of `fib`.
But most people think of "reading code" as reading and understanding code, and in that case they definitely make it easier.
As someone who has read code as easily as English for decades (which is apparently rare, if my co-workers are any indication), too many type annotations clutter it up and make it a lot harder to read. And this is after having used Typescript a lot in the past year and liking that system - it works well because so much can be inferred.
Python also has type inference. I don't think it really has more type annotation "noise" than Typescript does.
I enforce strong types on all Python code I’m responsible for - and make sure others don’t play fast and loose with dict[str, Any] when they could use a well defined type.
Doing otherwise is just asking for prod incidents.
I worked on a project that did this. Drove me absolutely nuts. It's like having all the worst parts of a dynamic language and a static language with none of the benefits.
I'd much rather just work in a statically typed language from the start.
I too would much rather work in a statically typed language, but sometimes you have to work with what you’ve got.
These systems are part of the core banking platform for a bank so I’d rather some initial developer friction over runtime incidents.
And I say initial friction because although developers are sometimes resistant to it initially, I’ve yet to meet one who doesn’t come to appreciate the benefits over the course of working on our system.
Different projects have different requirements, so YMMV but for the ones I’m working on type hints are an essential part of ensuring system reliability.
I'm not opposed to type hints, I use them everywhere. It's specially the strict linting.
But it's a fair point. If you truly have no option it's better then absolutely nothing. I really wish people would stop writing mission critical production code in Python.
I feel like it's more often a result of suffering from success that leads to these situations, rather than a lack of foresight to begin with.
For example I work on a python codebase shared by 300+ engineers for a popular unicorn. Typing is an extremely important part of enforcing our contracts between teams within the same repository. For better or for worse, python will likely remain the primary language of the company stack.
Should the founder have chosen a better language during their pre-revenue days? Maybe, but at the same time I think the founder chose wisely -- they just needed something that was _quick_ (Django) and capable of slapping features / ecosystem packages on top of to get the job done.
For every successful company built on a shaky dynamic language, there's probably x10 more companies that failed on top of a perfect and scalable stack using static languages.
A statically typed language doesn't prevent developers from using the equivalent of dict[str, Any].
Well, some do. Let's not pretend all static type systems are the same.
Type hints seem fantastic for when you're in maintenance mode and want to add sanity back to a system via automated tooling.
However for new projects I find that I'd much rather pick technologies that start me off with a sanity floor which is higher than Python's sanity ceiling. At this point I don't want to touch a dynamically typed language ever again.
Always fun to inherit a data-scientist-derived chunk of Python code in which every type hint is 'takes a dataframe' and 'returns a dataframe'...
What exactly drove you nuts? The python ecosystem is very broad and useful, so it might be suitable for the application (if not, reasonable that you'd be frustrated). With strict mypy/pyright settings and an internal type-everything culture, Python feels statically typed IME.
It's not even close compared to working with Java or Go or any language built with static typing in mind.
To be clear, I'm not opposed to type hints. I use them everywhere, especially in function signatures. But the primary advantage to Python is speed (or at least perceived speed but that's a separate conversation). It is so popular specifically because you don't have to worry about type checking and can just move. Which is one of the many reasons it's great for prototypes and fucking terrible in production. You turn on strict type checking in a linter and all that goes away.
Worse, Python was not built with this workflow in mind. So with strict typing on, when types start to get complicated, you have to jump through all kinds of weird hoops to make the checker happy. When I'm writing code just to make a linter shut up something is seriously wrong.
Trying to add typing to a dynamic language is, in my opinion, almost always a bad idea. Either do what TypeScript did and write a language that compiles down to the dynamic one, or just leave it dynamic.
And if you want types just use a typed language. In a production setting, working with multiple developers, I would take literally almost any statically typed language over Python.
But TypeScript erases (its) types at runtime, exactly like Python. Python is Python's TypeScript. Whether you want TS or JS-like semantics is entirely dependent on whether you use a type checker and whether you consider its errors a build breaker.
I'm not sure what you're trying to say here. If you mean Python's type annotations are erased at runtime... okay? It still has runtime type information. It's not "erasure" as that term applies to Java, for example. And TypeScript compiles down to JavaScript, so obviously its runtime behavior is going to be the same as JavaScript's.
In my view it's always a mistake to try and tack static typing on top of a dynamic language. I think TS's approach is better than Python's, but still not nearly as good as just using a statically typed language.
The fact that the types are reflected at runtime is what makes FastAPI/Pydantic possible, letting us use Python types to define data models used for serialization, validation, and generating. In TypeScript, we have to use something like Zod, instead of "normal" TypeScript types, because the types are not reflected at runtime.
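To make that runtime-reflection point concrete without depending on Pydantic itself, here is a hand-rolled sketch of the mechanism such libraries build on (`typing.get_type_hints` reading annotations at runtime); the `User` model and `validate` helper are invented for illustration:

```python
from dataclasses import dataclass
from typing import get_type_hints

@dataclass
class User:
    id: int
    name: str

# Unlike TypeScript, the annotations survive to runtime, so a library
# can inspect them and coerce/validate incoming data accordingly.
def validate(data: dict) -> User:
    hints = get_type_hints(User)  # {'id': int, 'name': str}
    return User(**{key: hints[key](value) for key, value in data.items()})

user = validate({"id": "42", "name": "Ada"})  # id coerced from "42" to 42
```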
I think a couple of things have to be untangled here.
The problem we are talking about in both Python and TS comes from the fact that they are (or compile down to) dynamic languages. These aren't issues in statically typed languages... because the code just won't compile if it's wrong, and you don't have to worry about getting data from an untyped library.
I don't know a lot about Zod, but I believe the problem you are referring to is more about JavaScript than TS. JavaScript does a LOT of funky stuff at runtime; Python, thank God, actually enforces some sane type rules at runtime.
My point was not about how these two function at runtime. My point was that if you want to tack static typing onto a dynamic language, TypeScript's approach is the better one, but even it can't fix the underlying issues with JS.
You could take a similar approach in Python. We could make a language called Tython that is statically typed and then compiles down to Python. You eliminate an entire class of bugs at compile time, get a far more reliable experience than the current weirdness with gradual typing and linters, and you still get Python's runtime type information to deal with things like interop with existing Python code.
TypeScript requires a compiler to produce valid JavaScript. Python shoved types into Python 3 without breaking backwards compatibility, I think.
You would never have typing.TYPE_CHECKING to check whether type checking is being done in TypeScript, for example, because type hints can't break JavaScript code, something that can happen in Python when you have cyclic imports introduced just to add types.
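That escape hatch looks like this (`heavy_module` is a hypothetical module that would otherwise import this one back, creating the cycle):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Executed only by the type checker, never at runtime, so the
    # would-be cyclic import never actually happens.
    from heavy_module import Heavy  # hypothetical module

def process(item: "Heavy") -> None:  # quoted: not evaluated at runtime
    print("processing", item)
```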
No, it doesn't. It doesn't throw errors, but the types are still introspectable in Python, unlike TypeScript.
I would say mypy is better than nothing but it still misses things sometimes, and makes some signatures difficult or impossible to write. I use it anyway, but patched-on static typing (Erlang, Clojure, and Racket also have it) seems like a compromise from the get-go. I'd rather have the type system designed into the language.
Mypy is trash but Pyright is very good.
I went from mypy to pyright to basedpyright and just started checking out pyrefly (the OP), and it's very promising. It's written in Rust so it's very efficient.
You know you can just use a compiled language with statically checked types, right?
For the kind of work I'm using Python for (computer vision, ML), not really. The ecosystem isn't there and even when it's possible it would be much less productive for very little gain. Typed Python actually works quite well in my experience. We do use C++ for some hand-written things that need to be fast or use libraries like CGAL, but it has a lot of disadvantages like the lack of a REPL, slow compile times and bad error messages.
Python is the second most popular programming language in the world. It's not that easy to avoid.
Typescript turned me into a believer but my gosh do python typings feel clumsy and quickly busy up files. I get why, and it’s not exactly realistic, but I wish a lot of it didn’t require an import.
Whatever the solution is, it doesn’t include giving up on Python typings.
With the newest Python versions, most of the time I don't need typing imports!
Yeah, post 3.10 you don't need Union, Optional, List, Duct, Tuple. Any is still necessary when you want to be permissive, and I'm still hoping for an Unknown someday...
> hoping for an Unknown someday
Wouldn't that just be `object` in Python?
No, because the type checker should prevent you interacting with `Unknown` until you tie it down, but `object` is technically a valid type
Exactly, I want it to complain if I try to manipulate the fields/methods of an unknown object.
By default, Mypy warns you if try to reassign a method of any object[1]. It will also warn you when you access non-existent attributes[2]. So if you have a variable typed as `object`, the only attributes you can manipulate without the type checker nagging are `__doc__`, `__dict__`, `__module__`, and `__annotations__`. Since there are very few reasons to ever reassign or manipulate these attributes on an instance, I think the `object` type gets us pretty darn close to an "unknown" type in practice.
There was a proposal[3] for an unknown type in the Python typing repository, but it was rejected on the grounds that `object` is close enough.
[1]: https://mypy.readthedocs.io/en/stable/error_code_list.html#c...
[2]: https://mypy.readthedocs.io/en/stable/error_code_list.html#c...
[3]: https://github.com/python/typing/issues/1835
That's what I use it for. If you type something as `object`, static type checkers can just narrow down the exact typing later.
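A small sketch of that workflow: `object` exposes almost nothing until you narrow it, which is roughly the "unknown" behavior discussed above (the function is illustrative).

```python
def describe(value: object) -> str:
    # On `object`, a checker rejects value.upper() or value * 2;
    # isinstance narrows the type and unlocks the real interface.
    if isinstance(value, str):
        return value.upper()   # value is str here
    if isinstance(value, int):
        return str(value * 2)  # value is int here
    return "something else"
```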
> List, Duct, Tuple...
I'm aware this is just a typo but since a lot of the Python I write is in connection with Airflow I'm now in search of a way to embrace duct typing.
I am glad they improved this but I still like Optional[], and to a lesser extent, Union[]. It's much more readable to have Optional[str] compared to str | None.
I disagree with `Optional`. It can cause confusion in function signatures, since an argument typed as "optional" might still be required if there is no default value. Basically I think the name is bad, it should be `Nullable` or something.
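A quick illustration of that confusion (function names invented):

```python
from typing import Optional

# "Optional" only means "may be None"; it says nothing about whether
# the argument can be omitted. This one is required but nullable:
def find(name: Optional[str]) -> str:
    return name if name is not None else "<anonymous>"

# ...while this one is omittable but never None:
def greet(name: str = "world") -> str:
    return f"hello {name}"
```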
I believe Python's own documentation also recommends the shorthand syntax over `Union`. Linters like Pylint and Ruff also warn if you use the imported `Union`/`Optional` types. The latter even auto-fixes it for you by switching to the shorthand syntax.
Option is a pretty common name for this, as is Maybe.[^1] Either way, I think that ship has sailed in Python.
[^1]: https://en.wikipedia.org/wiki/Option_type
Python types - all the onus of static types, with none of the performance!
I enjoy packages like pydantic and SOME simple static typing, but if I’m implementing anything truly OOP, I wouldn’t first reach for Python anyway; the language doesn’t even do multiple constructors or public/private props.
Edit: as a side note, I was interested to learn that for more verbose type specification, it’s possible to define a type in variable-like syntax at the top: mytype = int|str|list|etc.
There is an important (I would say primary) benefit of types that isn't performance: they make a program structure that [you | your IDE | LLMs] can reason about.
Agreed, it definitely improves my experience when the compiler “knows” the variable types.
The most annoying part is that the type checking exists outside the regular runtime. I constantly run into situations where the type checker is happy but the thing explodes at runtime, or where the type checker keeps complaining about working code. And different type checkers will regularly complain about different things too. It's a whole lot of work making every part of the system happy, and the result still feels extremely brittle.
They ARE only “hints” afterall.
> with none of the performance!
If you care about micro-optimizations, the first one that overwhelms everything else is to not use Python.
Anyway, if your types are onerous, you are using them wrong. Even more in a progressive type system where you always have the option of not using them or putting an "Any" there.
While most responders seem to have latched on to the pedantry of my "oop" comment - this is what my comment was intended to imply: why force the verbosity of static types on Python when statically typed (sic compiled) languages typically have much better runtime performance? The real answer, I suspect, is quality of life/readability of code, but they are just "hints" after all.
Ruby doesn't have multiple constructors either, and literally everything in Ruby is an object, so it's practically impossible to avoid "doing OOP". I don't see how being "truly OOP" has anything to do with the language supporting method overloads.
One shouldn't be implementing anything "truly OOP" to begin with...
Quotes speak louder than words… However it’s hard to say “what one should or shouldn’t” be implementing in general terms.
I find the type hints harder and slower than java/C/...
try typing a decorator, or anything using file IO
I find it extremely difficult, if not impossible, and I did type theory
(the type checkers being really, really stupid doesn't help either)
So don’t? Just annotate the stuff that’s not annoying. I agree, IO is annoying to type. Decorators are fine, it’s just a function that returns a function. It’s still a win, however much you decide to annotate.
What does "multiple constructors" buy you that you can't get from multiple static methods that return an object of the enclosing class's type?
Maybe I'm missing out on something cool...
Perhaps just a fuzzy feeling. I suppose I haven’t tried using static methods for that purpose. Will give it a shot!
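For the record, that classmethod idiom looks like this (`Point` is an invented example):

```python
class Point:
    def __init__(self, x: float, y: float) -> None:
        self.x, self.y = x, y

    # Named alternative constructors stand in for the overloaded
    # constructors of languages like Java, Swift, or C#:
    @classmethod
    def from_tuple(cls, pair: tuple[float, float]) -> "Point":
        return cls(*pair)

    @classmethod
    def origin(cls) -> "Point":
        return cls(0.0, 0.0)
```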
Python types - all the onus of static types, with none of the power of calculus of constructions. /s
I believe types are a great way to encourage good practices with relatively little investment. They provide type-safety, act as living documentation, and add an extra layer of protection in production.
However, in a large codebase, consistency can become a challenge. Different developers often approach the same problem in different ways, leading to a mix of type patterns and styles, especially when there’s no clear standard or when the problem itself is complex.
With the rise of LLM-generated code, this issue becomes even more pronounced — code quality and craftsmanship can easily degrade if not guided by proper conventions.
I recently discovered that the folks building uv and ruff are also building a type checker, and it already works really well (and fast!), despite beta status:
https://github.com/astral-sh/ty
My experience adding types to un-typed Python code has convinced me that static typing should be required for anything more complicated than a single purpose script. Even in old and battle tested code bases so many tiny bugs and false assumptions are revealed and then wiped out.
It's not perfect in Python, and I see some developers introduce unnecessary patterns trying to make type-"perfect" `class Foo(Generic[T, V])` (or whatever) abstractions where they are not really necessary. But if the industry is really going all-in on Python for more than scripting, so should we for typed Python.
I know I am going to be in the minority, but I don't understand why we can't let Python be Python. Static typing is great, and there are already other statically typed languages for all your needs. Why not use them?
Well, at least it doesn't create two incompatible Pythons like async and (I assume) free threading.
I used to be of the same opinion, but after giving type hints a real try, I changed my mind.
You should not see type hints as real, hard types, but more as a kind of documentation that helps your linter and type checker catch mistakes.
Because Python has a lot of things it's great at (numeric stuff, ML, computer vision, scripting), and with types you can actually rely on it to work. It's the best of both worlds.
I sometimes felt that Python was rather strongly typed in many parts. As such, being able to track the type of something would often have been useful, instead of waiting for it to crash with some error.
Like back in 2.7 difference between byte-array and string... Offloading such cases is mentally useful.
> Why not use them?
Because you can now use typing WITH the entire Python ecosystem.
Or you could just use a statically typed language and get a much better experience.
An entire class of bugs, wiped out by a thing called a "compiler". Gigahours of downtime and bug fixing globally prevented by a modest extra step up front. Great stuff.
I had someone say to me they preferred strict type checking in a Python linter over a statically typed language because they "don't like a build step"...
Dudes it's literally just worse compilation with extra steps.
I'm in favor of partitioning the set of reasons it can fail to compile into separate checks with separate tools. Taming the zoo of tooling is extra work, but smaller more focused tools are easier to work with once you understand their relationship to their neighbors.
There's a world of difference between:
> I've been using a different type checker and I like it, you should try it
And
> I'd like to switch our project to a different compiler
The former makes for more nimble ecosystem.
Yes, if you are stuck with Python something is certainly better than nothing. But we shouldn't be writing large production apps in it in the first place.
Should we be writing large apps at all?
I don't know how to respond to this. Yes, large software projects do in fact exist...
Sometimes that is not an option, consider the bias which python gets due to all the LLM libraries being python first.
Yeah, I thought lack of typing in python was intentional to support this another paradigm of programming for geniuses who don't need hand holding.
Turns out they just didn't know any better?
How would a static type system have anything to do with handholding?
Would you? Why?
Python has a great experience for a bunch of tasks and with typing you get the developer experience and reliability as well.
No you don't. You get the illusion of static types without the actual upsides.
For any even medium sized project or anything where you work with other developers a statically typed language is always going to be better. We slapped a bunch of crap on Python to make it tolerable, but nothing more.
I disagree and I've been using Haskell professionally for ten years so I know what I'm talking about when it comes to types. Typed Python isn't perfect but it's totally workable with medium sized projects and gives you access to a great ecosystem.
Everyone knows Haskell is only used for white papers :p
Yeah it's workable, and better than nothing. But it's not better than having an actual static type system.
1. It's optional. Even if you get your team on board you are inevitably going to have to work with libraries that don't use type hints
2. It's inconsistent, which makes sense given that it's tacked onto a language never intended for it.
3. I have seen some truly goofy shit written to make the linter happy in more complex situations.
I honestly think everything that's been done to try to make Python more sane outside scripting or small projects (and the same applies to JS and TS) is a net negative. Yes, it has made those specific ecosystems better and more useful, but it's removed the incentive to move to better technology/languages actually made to do the job.
What job are we talking about and why is TypeScript or typed Python actually bad at it?
I'd say Typescript/JavaScript on the backend are a bad idea across the board. That's not really because of this conversation, just in general.
The comment about Typescript was really about JavaScript. It's a patch on top of JavaScript, which is a shit show and should have been replaced before it ended up forming the backbone of the internet.
Python, typed or otherwise, isn't good for anything past prototyping, piping a bunch of machine learning libraries together, or maybe a small project. The minute the project gets large or starts to move towards actual production software Python should be dropped.
Happily using both in production here, guess I'm just hallucinating.
I'm not saying you _can't_ do it. You could write production software in Bash if you really wanted to. I'm saying there are much better options.
I don't think writing your frontend in Rust instead of TypeScript or your computer vision pipeline in Java would be better at all.
> Why?
Python’s 3 traditional weak spots, which almost all statically-typed languages do better: performance, parallelism and deployment.
None of those things have to do with typing. Python is slow because it's interpreted, bad at parallelism because of the GIL (which is going away), and bad at deployment because of the messy ecosystem and build tools (although it's pretty good with Nix). Conversely, it's not static typing that makes other languages good at those things.
It is a great choice though for many problems where performance isn't critical (or you can hand the hard work off to a non-Python library like Numpy or Torch). Typing just makes it even better.
In addition to what others have mentioned, it also just makes it easier to come back later to a code base and make changes, especially refactoring. In many cases you don't even really have to add many type hints to get benefits from it, since many popular libraries are more-or-less already well-typed. It can also substitute for many kinds of unit tests that you would end up writing even 5 years ago. If you're an infrastructure engineer or data scientist that's usually just writing a lot of glue code, then it greatly helps speed up your output (I've found)
This
Without typing it is literally 100x harder to refactor your code. Types are like a contract which, if maintained after the refactor, gives you confidence. Over time it leads to faster development.
I really think that Rust has one of the best designed/inspired type systems.
If I had to rewrite a Python project, I would consider Rust or another statically typed language before choosing to continue in a dynamic language with types bolted on. I hope the situation improves for dynamic languages with optional types, but it still feels weird and bolted onto the language because it is.
> or another statically typed language
I'm a professional .Net Core developer, but I'd throw my hat in the ring for Swift on this one. While obviously not exactly a 1:1 with Rust, there is definitely some common benefits between the two. Though, from what I understand of Rust (very little), its typing system is slightly more strict than Swift's which is slightly more strict than C#'s.
Python's dynamic nature can make it quite difficult to express some things correctly. That, or the type checkers have issues when it comes to understanding what would be considered safe in other languages. Years ago, when I knew far less about types and programming, I never had such problems in, for example, Java. It was sometimes stupid, but I always found a way to express things. Although it could also be that I merely want more out of inference and safety now.

For example, recently I wanted a pipeline of steps, where the steps could have any input and output type, as long as each type aligns with the previous step's types, and the type checker should also know what the final output type is. I additionally wanted it to work so that I don't have to add all the steps at once, so that I can construct the pipeline step by step. I tried for hours, but didn't find a working solution that type checks. I also tried with the help of LLMs, which gave superficially great-looking code for this, but then there was always some type error somewhere, and they struggled to fix it.

Ultimately, I gave up on the type checking between steps and on the output type of the pipeline, as I realized I had invested hours into something that might be impossible, or way waaay too much work for what I get from it. I would not have spent any time on this without type annotating and would have simply gone with a dynamic solution.
That doesn't sound like it has anything to do with the dynamic nature of Python. Type checking is a static analysis of the source code, so if you want a type to be inferred across steps, you'll have to make use of generics:
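The snippet that followed isn't preserved here; a minimal sketch of the generics approach (names and details are my own, not the original commenter's) might look like this. Note that each step is applied eagerly, as it is added:

```python
from typing import Callable, Generic, TypeVar

T = TypeVar("T")
U = TypeVar("U")


class Pipeline(Generic[T]):
    """Each add_step returns a new Pipeline parameterized on the step's
    output type, so the checker can track the type through the chain."""

    def __init__(self, value: T) -> None:
        self.value = value

    def add_step(self, step: Callable[[T], U]) -> "Pipeline[U]":
        # The step runs immediately; the result seeds the next Pipeline.
        return Pipeline(step(self.value))

    def terminate(self) -> T:
        return self.value


# "123" -> 123 -> 246; the checker infers `result` as int.
result = Pipeline("123").add_step(int).add_step(lambda n: n * 2).terminate()
print(result)  # → 246
```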
You could further constrain the generic type through type variables: https://docs.python.org/3/library/typing.html#typing.TypeVar

I think this pipeline implementation does some things differently from what I wanted (but did not precisely describe). It seems that each step is run right away, as it is "added", rather than collected and run when `terminate` is called. Also, each step can only consume the result of the previous step, not the results of earlier steps. This can be worked around by ending the pipeline and then starting multiple pipelines from the result of the first pipeline, if needed. I think you would need to import Generic and write something like `class Pipeline(Generic[T]):` as well? Or is `class Pipeline[T]:` a short form of that?
In my experiment I wanted to get a syntax like this:
So then I would need generics for `Step` too, and then Pipeline would need to change its result type with each call of `add_step`, which it seems current type checkers cannot statically check.

I think your solution maybe circumvents the problem because you immediately apply each step. But then how would the generic type work? When is it bound to a specific type?
> Or is `class Pipeline[T]:` a short form of that?
Yes, since 3.12.
> Pipeline would need to change result type with each call of `add_step`, which seems like current type checkers cannot statically check.
Sounds like you want a dynamic type with your implementation (note the emphasis). Types shouldn't change at runtime, so a type checker can perform its duty. I'd recommend rethinking the implementation.
This is the best I can do for now, but it requires an internal cast. The caller side is type safe though, and the same principle as above applies:
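The actual snippet isn't preserved here, but a sketch of one shape a deferred pipeline with an internal cast could take (my own reconstruction, not necessarily the commenter's code):

```python
from typing import Any, Callable, Generic, TypeVar, cast

A = TypeVar("A")  # pipeline input type
B = TypeVar("B")  # pipeline output type
C = TypeVar("C")


class Pipeline(Generic[A, B]):
    """Steps are collected and only executed in run(). The internal step
    list is untyped, so casts are needed internally, but the caller-facing
    types stay precise."""

    def __init__(self, steps: "list[Callable[[Any], Any]] | None" = None) -> None:
        self._steps: "list[Callable[[Any], Any]]" = steps or []

    def add_step(self, step: Callable[[B], C]) -> "Pipeline[A, C]":
        # Internal cast: the step list loses precise types on purpose.
        return cast("Pipeline[A, C]", Pipeline(self._steps + [step]))

    def run(self, value: A) -> B:
        result: Any = value
        for step in self._steps:
            result = step(result)
        return cast(B, result)


p: "Pipeline[str, str]" = Pipeline()
q = p.add_step(int).add_step(lambda n: n * 2)  # inferred as Pipeline[str, int]
print(q.run("21"))  # → 42
```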
Thanks for your efforts. I didn't expect and couldn't expect anyone to invest time into trying to make this work. I might try your version soon.
I'm founding a company that is building an AOT compiler for Python (Python -> C++ -> object code) and it works by propagating type information through a Python function. That type propagation process is seeded by type hints on the function that gets compiled:
https://blog.codingconfessions.com/i/174257095/lowering-to-c...
This sounds even worse than Modular/Mojo. They made their language look terrible by trying to make it look like Python, only to effectively admit that source compatibility will not really work any time soon. Is there any reason to believe that a different take on the same problem with stricter source compatibility will work out better?
Have you talked to anyone about where this flat out will not work? Obviously it will work in simple cases but someone with good language understanding will probably be able to point out cases where it just won't. I didn't read your blog so apologies if this is covered. How does this compiler fit into your company business plan?
Our primary use case is cross-platform AI inference (unsurprising), and for that use case we're already in production by startups to larger co's.
It's kind of funny: our compiler currently doesn't support classes, but we support many kinds of AI models (vision, text generation, TTS). This is mainly because math, tensor, and AI libraries are almost always written with a functional paradigm.
Business plan is simple: we charge per endpoint that downloads and executes the compiled binary. In the AI world, this removes a large multiplier in cost structure (paying per token). Beyond that, we help co's find, eval, deploy, and optimize models (more enterprise-y).
I understood some of it. Sounds reasonable if your market already is running a limited subset of the language, but I guess there is a lot of custom bullshit you actually wind up maintaining.
Yup that's true. We do benefit from massive efficiencies though, thanks to LLM codegen.
I was extremely skeptical when typing was introduced. What's the point if runtime ignores them. I forced myself to use them as documentation and now I'm on the other side of the spectrum, all code should have them. It already helped me a lot during refactors and I always wished more code had types. I think the same is true for AIs, they will benefit from having the types explicitly said. It's a shame they currently default to untyped Python.
> What's the point if runtime ignores them.
I've been using this sparingly: https://pypi.org/project/type-enforced/
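The core idea of runtime enforcement can also be hand-rolled; this is a minimal sketch of the concept, not the type-enforced library's actual API:

```python
import functools
import inspect


def enforce(func):
    """Check simple (non-generic) annotations at call time; a sketch,
    not a substitute for a real enforcement library."""
    sig = inspect.signature(func)
    hints = func.__annotations__

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            expected = hints.get(name)
            # Only plain classes are checked here; generics need more work.
            if isinstance(expected, type) and not isinstance(value, expected):
                raise TypeError(
                    f"{name} must be {expected.__name__}, "
                    f"got {type(value).__name__}"
                )
        return func(*args, **kwargs)

    return wrapper


@enforce
def add(a: int, b: int) -> int:
    return a + b


print(add(1, 2))   # → 3
# add(1, "2")      # would raise TypeError at call time
```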
> What's the point if runtime ignores them.
Ideally, with static checking, the runtime shouldn't need to care about types, because code that typechecks shouldn't be capable of behaving contrary to the types declared.
Python, even with the most restrictive settings in most typecheckers, may not quite achieve that, but it certainly reduces the chance of surprises of that kind compared to typing information in docstrings, or just locked away in the unstated assumptions of some developer.
The thing is, when typing was being discussed, I was hoping it would lead to JavaScript-like evolution, where the dynamic nature of Python could be restricted if I use the right types, and a JIT compiler could optimize parts of the code, expecting u32 ints instead of PyObjects.
Type hints in Python add a great amount of visual noise to the code, and I actively avoid them wherever possible. If static typing is a must, use a language where static typing is not an afterthought, and let Python be Python.
Would you rather deal with a little visual noise or a runtime exception that you could've caught before code got to production? For me it's about tradeoffs, and so far the tradeoff has been well worth it.
I guess one man's noise is another man's treasure :P
Indeed, I almost can't read untyped Python code these days. It just feels like "what the hell is going on here?" and "what is this object?" every so often. Sorry to say, but most people who write Python just aren't good API designers, or software engineers in general, and type hints can at least help others get a vague idea of what the intent was.
[dead]
My view on typing in Python (a language I have used for decades) is that if I wanted types I would use a language designed from the ground up with strong and consistent typing built-in. Not bolt on a sort of type system which actively fights against the way I use the language on a day to day basis.
I use plenty of statically typed languages, Python's type hinting does not bring me joy.
> Not bolt on a sort of type system which actively fights against the way I use the language on a day to day basis.
Can you help me out with an example of a Python usage pattern against which the type system seems to be fighting?
Ok, so this is just one of many examples but the most immediate one is where I don't care about the immutable sanctity of the variable I have just declared.
I often use Python for data munging and I'll frequently write code that goes
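Something like this, sketched with hypothetical values (the original snippet isn't preserved):

```python
import json

foo = '{"ids": [1, 2, 3]}'  # str: raw payload
foo = json.loads(foo)       # dict parsed from it
foo = foo["ids"]            # list[int]
foo = max(foo)              # int
print(foo)  # → 3
```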
Where the type of the value being assigned to foo is different each time. Now, obviously (in this simplistic example that misses subtleties) I could declare a new variable for each transformation step, or do some composite type-building thing, or refactor this into separate functions for each step that requires a different type, but all of those options are unnecessary busywork for what should be a few simple lines of code.

Thanks, now I get why you feel like the type system is fighting your style of programming.
> all of those options are unnecessary busy work for what should be a few simple lines of code
If you re-type your variable often, then how do you make sure you’re really keeping track of all those types?
If you re-type it only a few times, then I’m not entirely convinced that declaring a few additional variables really constitutes busywork.
Small example with additional variables instead of re-typing the same variable:
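For instance (a hedged reconstruction, values hypothetical):

```python
# Each transformation gets its own name, so every variable keeps one type.
raw = "1, 2, 3"                    # str
parts = raw.split(",")             # list[str]
numbers = [int(s) for s in parts]  # list[int] (int() tolerates whitespace)
total = sum(numbers)               # int
print(total)  # → 6
```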
This survives strict type checking (`mypy --strict`). I don't feel that renaming the variables introduces much noise or busywork here? One might argue that renaming even adds clarity?

pyright will accept this. mypy should accept this when using --allow-redefinition-new as well.
> mypy should accept this when using --allow-redefinition-new as well
TIL, thank you!
I'm a huge supporter of typing in Python, but I've observed that the type sense PyCharm provides out of the box has its fair share of bugs as well.
You should also check out ty by Astral; it is pretty fast and does a good job at type checking.
https://docs.astral.sh/ty/editors/
The biggest reason I use typehints is that VSCode's intellisense relies on them - and I know I've missed one when typing a dot doesn't give me the method I'm expecting.
I hate typing in Python. I spend a good chunk of my day fighting the type checker and adding meaningless assertions, casts, and new types all to satisfy what feels like an obsessive compulsive nitpicker. "Type partially unknown" haunts my dreams.
Duck typing is one of the best things about Python. It provides a developer experience second to none. Need to iterate over a collection of things? Great! Just do it! As long as it is an iterable (defined by methods, not by type) you can use it anywhere you want to. Want to create a data object that maps any hashable type to just about anything else? Dict has you covered! Put anything you want in there and don't worry about it.
If we ended up with a largely bug free production system then it might be worth it, but, just like other truly strongly typed languages, that doesn't happen, so I've sacrificed my developer experience for an unfulfilled promise.
If I wanted to use a strongly typed language I would, I don't, and the creeping infection of type enforcement into production codebases makes it hard to use the language I love professionally.
Couldn't agree more! I've been using Python for almost 20 years, my whole career is built on it, and I never missed typing. Code with type hints is so verbose and unpythonic, making it much harder to read. Quite an annoying evolution.
As the article says, type hints represent a fundamental change in the way Python is written. Most developers seem to prefer this new approach (especially those who’d rather be writing Java, but are stuck using Python because of its libraries).
However it is indeed annoying for those of us who liked writing Python 2.x-style dynamically-typed executable pseudocode. The community is now actively opposed to writing that style of code.
I don’t know if there’s another language community that’s more accepting of Python 2.x-style code? Maybe Ruby, or Lua?
There is nothing python-2 about my python-3 dynamically typed code. I'm pretty confident a majority of new python code is still being written without type hints.
Hell, Python type annotations were only introduced in Python 3.5, when the language was 24 years old! So no, the way I write Python is the way it was meant to be; type hints are the gadget that was bolted on when the language was already fully mature. It's pretty ridiculous to paint code without type hints as unpythonic - that's the world upside down.
If I wanted to write very verbose typed code I would switch to Go or Rust. My python stays nimble, clean and extremely readable, without type hints.
> Hell, python type annotations were only introduced in python 3.5
Mypy was introduced with support for both Python 2.x and 3.x (3.2 was current at the time), using type comments, before Python introduced a standard way of using Python 3.0's annotation syntax for typing. Even when type annotations were added to Python proper, some uses now supported by them were left to mypy-style type comments in PEP 484/Python 3.5, with type annotations for variables added in PEP 526/Python 3.6.
I agree completely! To be clear, I don’t consider describing code as “Python 2-style” to be a bad thing. It’s how I describe my own Python code!
Overall, I have found very few Python 3 features are worth adopting (one notable exception is f-strings). IMO most of them don’t pull their weight, and many are just badly designed.
> Duck typing is one of the best things about Python.
And duck typing with the expected contract made explicit and subject to static verification (and IDE hinting, etc.) is one of the best things about Python typing.
> If we ended up with a largely bug free production system then it might be worth it, but, just like other truly strongly typed languages, that doesn't happen
I find I end up at any given level of bug freeness with less effort and time with Python-with-types than Python-without-types (but I also like that typing being optional means that its very easy to toss out exploratory code before settling on how something new should work.)
> I find I end up at any given level of bug freeness with less effort and time with Python-with-types than Python-without-types
Same.
Type hints also basically give me a "don't even bother running this if my IDE shows type warnings" habit that speeds up python development.
Absence of warnings doesn't guarantee me bug-free code but presence of warnings pretty much guarantees me buggy code.
Type hints are a cheap way to reduce (not eliminate) run time problems.
> Need to iterate over a collection of things?
Iterable[T]
> Want to create a data object that maps any hashable type to just about anything else?
Mapping[T, U]
Beyond the advantage that a type checker/linter can tell if you're doing the right thing when writing those functions, it lets an IDE infer what type you're iterating over, in order to provide more support/completion/hinting/checks (without recursively analyzing arbitrary code - so 'instantly' vs 'maybe not ever').
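A small sketch of those abstract container types in signatures (function names are hypothetical):

```python
from collections.abc import Hashable, Iterable, Mapping


def total(xs: Iterable[int]) -> int:
    # Accepts a list, tuple, set, generator, etc. - anything iterable.
    return sum(xs)


def describe(m: Mapping[Hashable, object]) -> "list[str]":
    # Accepts dict or any other read-only mapping with hashable keys.
    return [f"{k!r} -> {v!r}" for k, v in m.items()]


print(total(range(4)))        # → 6
print(describe({"a": 1}))     # → ["'a' -> 1"]
```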
For some reason, I find typing in Python to be more ergonomic and "get out of your way" compared to Typescript.
But they both seem to handle typing similarly.
I can't put my finger on why. Anybody else?
TypeScript is kind of all or nothing.
It's not quite all or nothing, but it's annoying to work with it if you only use it for some things and not for others. I find that if you have a mixture of TS and JS in various files I would rather just go all in on TypeScript so I don't have to manually annotate.
With Python you're still just working with Python files.
Type hints in python are just that, hints. Use them to help with clarity but enforcing them and requiring them everywhere generally leads to the worst of all worlds. Lots of boilerplate, less readable code and throwing away many of the features that make python powerful. Use the best language for the job and use the right language features at the right time. I see too many black or white arguments in the developer community, there is a middle ground and the best code is almost always written there.
> Lots of boilerplate, less readable code and throwing away many of the features that make python powerful.
IMO, that complaint almost always goes with overuse of concrete types when abstract (Protocol/ABC) types are more accurate to the function of the code.
There was a time when that was a limitation in Python typing, but it stopped being true long ago - it hasn't been a limitation for almost as long as Python typing existed before it was fixed.
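For instance, a Protocol makes an "indexable by ints" contract explicit without enumerating every concrete type (a sketch; names are hypothetical):

```python
from typing import Protocol


class IntIndexable(Protocol):
    """Structural type: anything with integer indexing qualifies -
    list, tuple, str, torch.Tensor, np.ndarray, custom classes..."""

    def __getitem__(self, index: int) -> object: ...


def first(x: IntIndexable) -> object:
    return x[0]


print(first([10, 20]))  # → 10
print(first("abc"))     # → 'a'
```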
> Use the best language for the job
Sometimes you don't have a choice though, and other people have picked Python despite it rarely being the best language for any job.
In that case it's nice to be able to use static type hints and benefit from improved readability, productivity and reliability.
Is it me or is everything slowly moving to strong types but don't want to commit?
For PHP it slowly got introduced in php5.4 and now it's expected to type hint everything and mark the file strict with linters complaining left and right that "you're doing a bad job by having mixed-type variables"
In Ruby you get Sorbet or RBS.
What is JavaScript? Oh, you mean TypeScript.
and so on ..
My take is that if you need strong types switch to a language with strong types so you can enjoy some "private static final Map<String, ImmutableList<AttemptType>> getAttemptsBatches(...)"
In general I see two types of Python in the wild:

- simple self-contained scripts: everything is a free function, no type hints
- over-engineered hierarchies of classes spread over dozens of files and modules, type hints everywhere
I personally largely prefer the first kind, but it seems even the standard formatting rules are against it (two empty lines between free functions etc.)
Type hints in dynamic languages are great, but I wish they came with deeper integration into the language runtime for validation and for optimizer setup.
If I have a function that takes an int, and I write down the requirement, why should a JIT have to learn independently of what I wrote down that the input is an int?
I get that it's this way because of how these languages evolved, but it doesn't have to stay this way.
That was the original intent of mypy, to allow a subset of Python to be interpreted by a JIT or transpiled to a compiled, statically typed language.
The type hints proved to be useful on their own so the project moved past what was useful for that purpose, but a new JIT (such as the one the upcoming CPython 3.14 lays the groundwork for) could certainly use them.
For me, type hints are mainly useful because they're the only reliable way to get decent IDE auto-completion. Beyond that, they feel like a bolted-on compromise that goes against the spirit of Python. If you really need strict typing, you're probably better off using a statically typed language.
JSDoc plays a similar role with Javascript. Moreover it is supported out of the box by VSCode, so add a few JSDoc comments to your types and functions, and intellisense instantly kicks in.
I liked python in my early days because it felt simple and easy and when I tried other languages having to deal with types felt so annoying... but then I grew and had to work with bigger codebases and guess what - having types (and static type checking during compilatin) helps A LOT... :)
Has anyone had good luck with auto-annotation of types of existing codebases? Either via LLM or via various runtime hooks? I work in a codebase that started off in Python 2 and isn't annotated with types in the majority of places, and I feel the pain every time I have to wonder what exactly the arguments to a method is.
Yep, I've used this pretty successfully, the ideal is to run it under realistic prod traffic over time to capture as many types as can flow into a given function, but a good set of unit/integration tests can also provide good coverage.
And if you re-use the same type store (SQLite DB) across multiple instrumented runs, you can further improve it.
https://github.com/Instagram/MonkeyType
RightTyper is much better in addition to running orders of magnitude faster.
https://github.com/RightTyper/RightTyper
(full disclosure, I am one of its authors).
I think I've always used Python type hints, but that was partially because early versions of Discord.py relied on them (maybe it still does). But it was also because I like to be able to mentally verify my code's correctness before running it, and waiting for runtime errors is comparatively a huge waste of time (in my opinion).
My permanent instructions to Claude are:
* Always strongly type, for local variables, method parameters, and return types
* Avoid Any unless absolutely required
* hasattr() and get() are often code smells; if the type can be known, use that type
* Use beartype for all methods.
I _love_ beartype and want to plug it to everyone on HN: https://github.com/beartype/beartype
I'm building my own coding agent, like Claude, and it is built with opinionated style. Strongly typing Python and using beartype are what it will try to do unless the user specifies otherwise.
Misleading title. I was expecting to read about why devs actually embrace type hints. What the article is about is why you should and how you can use type hints. That's valuable, but different from what the title suggests.
Besides, the lack of static typing is a lot of the appeal for beginners. It's much harder to convince a non-CS beginner why they should bother with the extra burden of type hints. They are optional anyway and just slow folks down (so they might think). Be careful with generally demanding that everybody use them.
But they probably help coding assistants to make fewer mistakes, so maybe that will soon be an argument if it isn't already. (That's an angle I expected in the article.)
The reason to use type hints is simple: it vastly improves scalability, making LLM agents much less error prone. Try it: tell Claude to type hint every function (e.g., in your Claude.md file) and see how much easier it is to scale your agents.
This also works for humans, but many python programmers who learned python before type hints can't be bothered. :sad_panda:
Boring old me would be reaching for mojo in order to have types that actually are "real" rather than just an editing overlay of decorators/DSL/tooling.
Too bad Mojo gave up on Python compatibility on code level. Now it’s just one of a dozen “Python inspired” languages, with the only benefit that they aim for easily calling into other Python code.
It's the other way around, IMO: people who think the language should at least have type hints are now more willing to use it, now that there's better tooling for checking those hints.
Yes. And people who preferred the language (and its culture) before the "fundamental change" of adding type hints are now less willing to use it.
Probably some people are. I just mostly don't think about the annotations.
I’d love to see an analysis of how much typing hurts LLMs that need to read / edit your code (due to increased context) vs helps (due to more clear type context).
I want to believe that corrected typed python code is easier for smaller models to generate / interact with, but who knows how the trade-offs actually work out.
Types are invaluable in modern code bases because they end up saving a ton of tokens when agentic coding tools are trying to comprehend let alone modify the code. Python isn't intrinsically a great language for LLMs to work with, except in practice it is because they've had a great deal of training data for python. Type hints help a lot with this.
I like the type hints. The're not perfect and they've changed a lot between versions, but they really help catch issues early that you'd usually need to write unit tests for. Adding type hints is easier than writing those unit tests.
Then you can focus your tests on more interesting things
You just need to set your build up to actually do the checking as type hints by default are just documentation
My main complaint about them is no first-party support for type checking, you need external packages like beartype decorators.
Yeah, it would have been much better to have them enforced by default if present. Keeping them optional is fine, but I don't get the use case for "you can add them but not check them"... that just leads to actively misleading hints.
Use them for what they are (hints, documentation). Use it for gradual typing when implementation makes it hard to understand return or parameters types. But don't enforce it across your code base, use another language or another mindset instead.
More likely "Python developers phone-in AI code that uses type hints".
How do type hints work if for example you import a library that has not implemented type hints into your project in which you hope to have type hints? Do you just manually assign types to the outputs of this library?
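In practice the checker typically sees the untyped library's values as `Any`, and you narrow them yourself at the boundary - either with a cast, or by shipping a local stub (`.pyi`) file. A sketch (the library call is a stand-in, not a real package):

```python
from typing import Any, cast


def fetch_rows() -> Any:
    # Stand-in for a call into a hypothetical untyped third-party library.
    return [{"id": 1}, {"id": 2}]


# Option 1: narrow the Any once, at the boundary, with a cast.
rows = cast("list[dict[str, int]]", fetch_rows())

# Option 2 (not shown): write a stub file for the library and point the
# checker at it, e.g. via mypy's `mypy_path` or pyright's `stubPath`.
print(len(rows))  # → 2
```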
I like it but in my experience a lot of teams use them loosely without a type checker, more for understanding than for correctness. Reason for that is largely that it can be difficult to make the checkers happy…
A dynamic language with type hints has all of the disadvantages of a dynamic language, with all the disadvantages of a static language piled on top.
This is why I think Julia will win in the long run. It has an amazing type system, simple yet powerful. In particular, abstract types are much easier to define and use than abstract classes in Python.
Ooh, I completely disagree. Julia has a worse type system overall, IMO. The big downside of Julia is that Julia has no interfaces or protocols. So, you can't type assert that something is an iterable of integers, for example.
Another issue is that abstract types are completely undocumented and have no tooling support. You say it's easier to use an abstract type. Can you tell me what I need to define to create a working subtype of AbstractDict? Or Number? Or IO? It's completely undefined, and the only way to do it is to just define the type and then try it out and patch when it breaks because a method was missing.
Finally, there is no multiple inheritance. That means I can't define something which is both a subtype of AbstractArray and IO, for example.
There are no traits in Julia by default, that's true. But since types are first class citizens in Julia, traits can be implemented within the language. There is a package SimpleTraits.jl that implements Holy's trait trick, see also this tutorial https://ahsmart.com/pub/holy-traits-design-patterns-and-best...
An ability to work with types within the language is already a win for me.
I just don't see why people use Python in the AI era. Statically typed languages help AI reason about code enormously during compilation.
Company selling a product based on what's being glazed in the article.
It's always a small vocal fraction or they'd be using a different language.
I doubt that Meta (the company that sponsors the work on pyrefly) is looking forward to selling a product based on Python typing (assuming that's what's "what's being glazed in the article").
Beyond the usual advantages of Python’s type hints, it seems they can also improve code suggestions and completions when working with AI coding tools.
I used python on a large code base for quite a while. Many team members did not like type hints, and a codebase that doesn't maintain type hints makes it harder to use them.
However, if I had a choice, rather than use typehints in python, I would much rather just use a statically typed language. Short, tiny scripts in python? Sure. Anything that grows or lives a long time? Use something where the compiler helps you out.
Shocker. Static typing wins once again.
Typings have pretty cleanly been getting added to PHP over the last decade. I'm kind of surprised Python's are so bolted on by comparison
Python with type hints still lacks the performance benefits of static typing in a compiled language setting?
It's the developer performance benefit of catching type bugs early, not the application performance benefit from a compiler, that Python developers find compelling
You generally don't write Python if you want it to be really fast anyway (non-Python parts like numpy notwithstanding).
Yes. It also lacks the ease-of-deployment of a compiled language.
Because you almost always have a specific type in mind for that function parameter, so you might as well just write it down.
It's optional, and that's great, because its necessity or benefit is situational.
Python is amazing for that reason. You choose whether you want to do it or not.
Not willingly, but it's a lost cause getting AI generated code to avoid it.
I'm always surprised when people suggest using a different language if you want typing in Python. Python's (second?) largest appeal is probably its extensive ecosystem. Whenever people suggest just changing languages, I wonder if they work in isolation, without the need for certain packages or co-worker proficiency in that language.
I think people usually say it for a different reason. Types are not enforced. You can annotate your code that looks correct to the type checker, but the actual data flow at runtime can be with different types.
And it happens quite often in large codebases. Sometimes external dependencies report wrong types, e.g., a tuple instead of a list. It's easy to make such a mistake when a library is written in a compiled language and just provides stubs for types. Tuples and lists share the same methods, so it will work fine for a lot of use cases. And since your type checker will force you to use a tuple instead of a list, you will never know that it's actually a list that can be modified unless you disable type checking and inspect the data.
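That failure mode is easy to demonstrate: `cast` is a no-op at runtime, so a wrong stub annotation goes completely unnoticed (the function here is hypothetical):

```python
from typing import cast


def get_shape() -> "tuple[int, ...]":
    # Hypothetical: the stub declares a tuple, but the actual
    # implementation returns a list at runtime. cast() checks nothing.
    return cast("tuple[int, ...]", [2, 3])


shape = get_shape()
print(type(shape).__name__)  # → list, despite the annotation
```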
To be pedantic compiled languages only check types at compile time as well. If you have a C library that takes void* then it can easily go wrong at runtime.
Typing has only been around since python 3.5. As someone who has formally learned 2.7 in university when 3.0 had already been around for a few years, I suppose there are many who still lag years behind what the language can do due to old codebases and fears of incompatibility.
It’s ok. The mild ground of typing feels pretty clunky to me though
Definitely not a best of both world type outcome
They are optional until Pydantic is forced on you by some required library.
I started using Python at work with type hints. I am very satisfied.
Is there a type checker that works with numbers.Number?
I am suspicious of every coder who does not appreciate types.
an underrated benefit of type hints is how much better my copilot autocompletions are on type hinted code.
One can hope that cpython will use them, one day
"if a parameter is typed as an int, then only run the specialized 'int' code to process it"
This would increase performance and make typing more useful
Without significant language changes, this is not possible. While your code may be typed as an int, I can simply redefine what int means. I can also modify the code in your method.
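Concretely, CPython treats annotations as inert metadata, so the interpreter can't assume the parameter really is an int:

```python
def double(x: int) -> int:
    return x * 2


# The annotation is not enforced at runtime; a str sails right through.
print(double("ab"))  # → 'abab'

# Annotations are just ordinary objects attached to the function:
print(double.__annotations__)  # {'x': <class 'int'>, 'return': <class 'int'>}
```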
Yes
I guess it would work with the ongoing JIT work, which (as far as I understood) runs the code "as usual", then notices that a specific variable is always a dict (or whatever), and patches the code to run the dict-optimized path by default (falling back to the generic code if, somehow, the variable is no longer a dict).
With typing, the generic code could be avoided altogether. The algorithm would be:
This would: (or perhaps this is a stupid idea that cannot work :) )

There are already some bits of this with specific bytecode and the upcoming JIT; it's not using annotations at all, though.
I think you're forgetting that int is actually what other people call a BigInt, an integer with unlimited precision, not int32 or int64.
Isn't this a very 2018-2019 topic? I don't get the motivation to write about it in 2025, it's like telling people to use version control.
https://xkcd.com/1053/
Type hints are much easier to use nowadays than they were a few years ago, because the agentic tools like Claude Code are very good at converting an existing codebase to using type hints.
The flip side of it is that Claude Code will have a very bad time in a code base with grossly unsatisfiable or conflicting types (where a type checker would fail the project). A human should always first ensure that the types are broadly correct, with or without the assistance of code tools.
I can empathize with the code tools. Sometimes I’ll read Python code and have no idea at first glance if these are type bugs or creative coding by the dev. Python is incredibly flexible. Though I think most of the time you really shouldn’t be using the flexibility.
Because they follow any corporate initiative that gives them something to do, even if the type hints are the most unreadable and hackish form of typing in existence.
I recently had to debug someone else's Python code and trying to figure out what variables are was a massive headache, especially coming from C++.
I do not program that much in Python, but I believe the generally accepted wisdom in dynamic languages was explicit naming and lots of documentation (as comments and docstrings).
Absolutely the general accepted wisdom in line with best practices - that are often ignored
> explicit names and lots of documentation (as comments and docstrings).
Which can be out of date and are often missing. Might as well use type hints that can be statically checked.
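A small illustration of the difference: a docstring can silently go stale, while a hint is checkable. (This assumes an external checker such as mypy; the hint itself is not enforced at runtime.)

```python
def mean(values: list[float]) -> float:
    """Average of a list of numbers."""
    return sum(values) / len(values)

print(mean([1.0, 2.0, 3.0]))  # 2.0

# mean("abc")  # a checker such as mypy rejects this call before the code
#              # ever runs; a comment or docstring alone catches nothing
```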
If the name of a function and its docstring are out of date, then what you have is a bad coding culture. It's up there with god classes in OOP.
In Python, every variable is either defined or imported in the file in which it's used, so you always know where to find it. (Assuming you don't do `from foo import *`, which is frowned upon.)
In C++, a variable might be defined in a header or in a parent class somewhere else, and there's no indication of where it came from.
How does this help when trying to determine the parameters a function takes? You have to either hope that the name is descriptive enough or that the function is well-documented. Failing that, you need to read the code to find out.
CMD-click shows you.
Even C++ feels clunky in terms of types to me. Though that's probably down to me preferring a more complete type system, like Haskell's.
Ruby has had static typing via RBS for a while now. I don't know if it's because I'm primarily a Rails developer and DHH doesn't like static typing (so using it with Rails feels third-class), or maybe that I'm really just all the way into "the Ruby way", but it feels antithetical to a dynamically typed language to start shoehorning in static types. Even as a type definition in a separate file, it just feels wrong.
Ruby in particular is already strongly typed, so there aren't too many surprises with automatic conversions or anything like that. RBS also just makes metaprogramming more annoying - and Ruby's ability to do metaprogramming easily is one of its biggest strengths, in my opinion.
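The same friction shows up in Python, for what it's worth: attributes created dynamically are invisible to a static checker even though they work fine at runtime. A minimal sketch (the `Config` class is made up for illustration):

```python
class Config:
    def __init__(self, **kwargs):
        # Metaprogramming-style attribute creation: perfectly legal,
        # but a static type checker cannot see these attributes.
        for key, value in kwargs.items():
            setattr(self, key, value)

c = Config(host="localhost", port=8080)
print(c.host)  # works at runtime; a checker flags it as an unknown attribute
```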
If I wanted a statically typed language I would just use a statically typed language.
I've been working with Python for years (since 2014) and typing makes the code less buggy and easier to maintain. I also would hardly call it "shoehorning", as years of design went into it.
Honestly the only people I see who really push back against it are the people who haven't bothered learning it. Once people use it for a bit, in my experience at least, they don't want to go back.
Maybe it's just because Python is just kindof a lousy language to use in the first place. I started with Java and C++, did Python for a bit and switched to Ruby and never looked back. Being forced to use Python for anything feels like a punishment.
Years of design also went into Ruby's type system, and for the people that enjoy it - be my guest - but I would never use it for my own code.
Only lazy and lame developers prefer dynamic typing.
With duck typing, you don’t need type hints. You’re just supposed to use the variable however you “feel” it should be used when you’re using it, and it automagically just works!!! Try it! /s
Oh datatype hints. I ignored the headline on first scan, thinking it was about font metrics.
They do remove an entire class of avoidable errors in code bases; of course people will embrace them.
Python developers are embracing RuntimeError. It's an amazing language that had its place; now it's time to sunset it and forget it. Leave it for prototyping only.