Because the flexibility has been a boon and not a problem. The problem only comes when you try to express everything in a type system that is third party (the type checkers) and bolted on top.

It's a boon if the goal is to write code then go home. It's a loaded footgun if the goal is to compose a stack and run it in production within SLO.

Python type hints manage to largely preserve the flexibility while seriously increasing confidence in the correctness, and lack of crashing corner cases, of each component. There's really no good case against them at this point outside of one-off scripts. (And even there, I'd consider it good practice.)
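As a minimal sketch of what hints buy you (the function names here are invented for illustration), a checker like mypy refuses to pass an `Optional[int]` where an `int` is required, which forces the crashing corner case to be handled:

```python
from typing import Optional

def parse_port(value: str) -> Optional[int]:
    """Return the port number, or None if the string isn't numeric."""
    return int(value) if value.isdigit() else None

def connect(port: int) -> str:
    return f"connecting on {port}"

port = parse_port("8o80")  # typo in the input: returns None
# connect(port)  # a type checker rejects this: Optional[int] is not int
if port is not None:       # the hint nudges you to handle the None branch
    print(connect(port))
```

Note the code still runs unchecked if you want it to; the hints only add confidence, they don't remove the flexibility.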

As a side bonus, lack of familiarity with Python type hints is a clear no-hire signal, which saves a lot of time.

I think with types there is a risk of typing things too early or too strictly, or of types nudging one in a direction that reduces the applicability and flexibility of the final outcome. Some things can be difficult to express in types, so people choose easier-to-type solutions that are less flexible and, due to that inflexibility or limited applicability, introduce more work later when things need to change.

People say this all the time, but I've never seen any data proving it's true. Should be rather easy to test, too: I'm at a big company and different teams use different languages. The strictly typed languages don't have fewer defects, and those teams don't ship features any faster than the teams using loosely typed languages.

What I've experienced is that other factors make the biggest difference. Teams that write good tests, have good testing environments, good code review processes, good automation, etc tend to have fewer defects and higher velocity. Choice of programming language makes little to no difference.

>It's a boon if the goal is to write code then go home. It's a loaded footgun if the goal is to compose a stack and run it in production within SLO.

Never has been an issue in practice...

Did you forget /s at the end of this?

I work at big tech and the number of bad deploys and reverts I've seen go out due to getting types wrong is in the hundreds. Increased type safety would catch 99% of the reverts I've seen.

Also, have fun depending on 10-year-old libraries, as no one likes upgrades over fear of renames.

Ops type here, I’ve got multiple stories where devs have screwed up with typing and it’s caused downstream problems.

> Because the flexibility has been a boon and not a problem

Well, you could say that the problem in this case was the lack of documentation, if you wanted. The type signature could be part of the documentation, from this point of view.

Let me give a kind-of-concrete example: one year I was working through a fast.ai course. They have a Python layer above the raw ML stuff. At the time, the library documentation was mediocre: the code worked, there were examples, and the course explained what it covered. There were no type hints. It's free (gratis), I'm not complaining. However, once I tried making my own things, I constantly ran into questions about "can this function do X," and it was really hard to figure out whether my earlier code was wrong or whether the function was never intended to work with the X situation. In my case, type hints would have cleared up most of the problems.
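For illustration, a hypothetical signature in the spirit described above (the names are invented, not fast.ai's real API) shows how a hint answers "can this function do X" at a glance:

```python
from pathlib import Path
from typing import Sequence, Union

def load_images(src: Union[Path, Sequence[Path]], size: int = 224) -> list[str]:
    """The hint alone answers "can I pass a single Path?" — yes."""
    paths = [src] if isinstance(src, Path) else list(src)
    return [f"{p.name}@{size}" for p in paths]

print(load_images(Path("a.jpg")))                       # single path accepted
print(load_images([Path("a.jpg"), Path("b.jpg")], 64))  # so is a sequence
```

Without the `Union` in the signature, you'd be left experimenting or reading the source to learn which shapes of input were intended to work.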

> the lack of documentation

If the code base expects flexibility, trusting documentation is the last thing you'd want to do. I know some people live and die by the documentation, but that's just a bad idea when duck typing or composition is heavily used for instance, and documentation should be very minimal in the first place.

When a function takes a myriad of potential inputs, "can this function do X" is a question you answer by reading the function or the tests, not the prose about how it was intended to work 10 years ago or how some other random dev thinks it works.

Documentation doesn’t have to be an essay. A simple, automatically generated reference with proper types goes a long way to tell me „it can do that“ as opposed to „maybe it works lol“. That’s not the level of engineering quality I’m going for in my work.

This whole discussion is about how you might not want to be listing every single type a function accepts. I also kinda wonder how you automatically generate that for duck typing.

Generally using the Protocol[1] feature

    from typing import Protocol

    class SupportsQuack(Protocol):
        def quack(self) -> None: ...
This of course works with dunder methods and such. You can also decorate the class with @runtime_checkable (also from typing) to make `isinstance` etc. work with it

[1]: https://typing.python.org/en/latest/spec/protocol.html
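A minimal runnable sketch of the `@runtime_checkable` variant mentioned above (the `Duck`/`Rock` classes are invented for illustration):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class SupportsQuack(Protocol):
    def quack(self) -> None: ...

class Duck:
    def quack(self) -> None:
        print("quack")

class Rock:
    pass

# isinstance only checks that the method exists, not its signature
print(isinstance(Duck(), SupportsQuack))  # True
print(isinstance(Rock(), SupportsQuack))  # False
```

Note that `Duck` never declares any relationship to `SupportsQuack`; conformance is structural, which is what makes this fit duck-typed code.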

You're then creating a Protocol for every single function that could rely on some duck typing.

Imagine one of your functions just wants to move an iterator forward, and another just wants the current position. You're stuck either requiring a full iterator interface when only part of it is needed or creating one protocol per function.
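A sketch of the pattern being objected to, with invented names: each narrow need gets its own Protocol, which does work but multiplies the declarations.

```python
from typing import Protocol

class Advances(Protocol):
    def advance(self) -> None: ...

class HasPosition(Protocol):
    @property
    def position(self) -> int: ...

def step_forward(cursor: Advances) -> None:
    cursor.advance()           # only needs advance()

def report(cursor: HasPosition) -> int:
    return cursor.position     # only needs position

class Cursor:  # satisfies both protocols structurally
    def __init__(self) -> None:
        self._pos = 0
    def advance(self) -> None:
        self._pos += 1
    @property
    def position(self) -> int:
        return self._pos
```

Two one-method protocols for two one-line functions: precise, but it's easy to see how this reads as overhead in application code.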

In day-to-day life that's dev time that doesn't come back, as people are now spending time reading the protocol spaghetti instead of reading the function code.

I don't deny the usefulness of typing and interfaces in stuff like libraries and heavily used common components. But that's not most of your code in general.

For the collections case in particular, you can use the ABCs for collections that exist already[1]. There's probably one that satisfies your use case. There are also similar things for the numeric tower[2]. SupportsGE/SupportsGT/etc should probably be in the stdlib, but you can import them from typeshed like so

    from __future__ import annotations

    from typing import TYPE_CHECKING

    if TYPE_CHECKING:
        from _typeshed import SupportsGT
---

In the abstract sense though, most code can't correctly work with just anything that quack()s. The flip method on a penguin's flipper in a hypothetical animallib would probably have different implications than the flip method in a hypothetical lightswitchlib.

Or less by analogy, adding two numbers is semantically different than adding two tuples/str/bytes or what have you. It makes sense to consider the domain modeling of the inputs rather than just the absolute minimum viable to make it past the runtime method checks.

But failing that, there's always just Any if you legitimately want to allow any input, though this is costly (it effectively disables type checking for that variable) and is potentially an indication of some other issue.

[1]: https://docs.python.org/3.14/library/collections.abc.html

[2]: https://docs.python.org/3/library/numbers.html

> You're then creating a Protocol for every single function that could rely on some duck typing.

No, you are creating a Protocol (the kind of Python type) for every protocol (the descriptive thing the type represents) that is relied on for which an appropriate Protocol doesn’t already exist. Most protocols are used in more than one place, and many common ones are predefined in the typing module in the standard library.
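For instance, the common protocols for the iterator example upthread already ship in the standard library, so no bespoke Protocol is needed at all (this is a minimal sketch using `collections.abc`):

```python
from collections.abc import Iterable, Sized

# Sized covers anything with __len__; Iterable anything with __iter__.
# Both are predefined structural types — no per-function Protocol required.
def total_len(items: Iterable[Sized]) -> int:
    return sum(len(item) for item in items)

print(total_len(["ab", "cde"]))   # strings are Sized
print(total_len([(1, 2), [3]]))   # so are tuples and lists
```

Strings, tuples, and lists all satisfy `Sized` structurally, so one stdlib name covers the whole family of acceptable inputs.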
