If your implication is that "implementing __add__ means you can use the + operator", you are incorrect. This is a common Python beginner mistake, but it isn't really a Python type checking issue; it's complexity in how Python's built-ins interact with magic methods.

My suggestion -- don't rely on magic methods.
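To make the point concrete, here is a minimal sketch of the asymmetry being described (the `Meters` class is invented for illustration): implementing only `__add__` covers `Meters + int`, but the reflected `int + Meters` needs `__radd__` as well.

    class Meters:
        def __init__(self, value):
            self.value = value

        def __add__(self, other):
            # Handles Meters + int only; nothing here covers int + Meters
            if isinstance(other, int):
                return Meters(self.value + other)
            return NotImplemented

    m = Meters(3)
    print((m + 2).value)    # 5 -- dispatched to Meters.__add__
    try:
        print(2 + m)        # int.__add__ returns NotImplemented, and with no
    except TypeError as e:  # Meters.__radd__ to fall back on, Python raises
        print(e)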
This is a strange and aggressive bit of pedantry. Yes, you'd also need `__radd__` for classes that participate in heterogeneous-type addition, but it's clear what was meant in context. The fundamentals are not all "beginner" level, and beginners wouldn't be implementing operator overloads in the first place (most educators hold off on classes entirely for quite a while; they're pure syntactic sugar after all, and the use case is often hard to explain to a beginner).
Regardless, none of that bears on the original `slow_add` example from the Reddit page. The entire point is that we have an intuition about what can be "added", but can't express it in the type system in any meaningful way. Because the rule is something like "anything that says it can be added according to the protocol — which in practical terms is probably any two roughly-numeric types except for the exceptions, and also most container types but only with other instances of the same type, and also some third-party things that represent more advanced mathematical constructs where it makes sense".
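The closest approximation is probably a generic `Protocol`, sketched below (the `SupportsAdd` name and single type variable are my own; I believe typeshed ships something similar internally). It handles the easy cases, but it still rejects perfectly valid calls such as `slow_add(1, 2.0)`, because `int.__add__` is declared to accept only `int` and the reflected `float.__radd__` path isn't something the protocol can see.

    from typing import Protocol, TypeVar

    AddT = TypeVar("AddT")

    class SupportsAdd(Protocol[AddT]):
        def __add__(self, other: AddT, /) -> AddT: ...

    def slow_add(a: SupportsAdd[AddT], b: AddT) -> AddT:
        return a + b

    slow_add(1, 2)          # fine at runtime and for the checker
    slow_add("a", "b")      # fine
    slow_add(1, 2.0)        # works at runtime (3.0), but the checker rejects it
    # slow_add([1], (2,))   # rejected by the checker, and would raise at runtime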
And saying "don't rely on magic methods" does precisely nothing about the fact that people want the + symbol in their code to work this way. It does suggest that `slow_add` is a bad thing to have in an API (although that was already fairly obvious). But in general you do get these issues cropping up.
Dynamic typing has its place, and many people really like it, myself included. Type inference (as in the Haskell family) solves the noise problem (for those who consider it a problem rather than something useful) and is elegant in itself, but just not the strictly superior thing that its advocates make it out to be. People still use Lisp family languages, and for good reason.
But maybe Steve Yegge would make the point better.
> This is a strange and aggressive bit of pedantry.
There's nothing pedantic about it. That's how Python works, and getting into the nuts and bolts of how Python works is precisely why the linked article makes type hinting appear so difficult.
> The entire point is that we have an intuition about what can be "added", but can't express it in the type system in any meaningful way.
As the post explores, your intuition is also incorrect. For example, as the author discovers in the process, addition via __add__/__radd__ is not addition in the algebraic field sense. There is no guarantee that adding types T + T will yield a T. Or that both operands are of the same type at all, as would be the case with "adding" a string and int. Or that A + B == B + A. We can't rely on intuition for type systems.
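For what it's worth, each of those claims has a standard-library example:

    from datetime import date, timedelta

    # Operands need not share a type: date + timedelta -> date
    print(date(2024, 1, 1) + timedelta(days=30))    # 2024-01-31

    # Not commutative: concatenation depends on operand order
    print("ab" + "cd", "cd" + "ab")                 # abcd cdab

    # T + T need not yield T: bool + bool gives an int
    print(True + True, type(True + True))           # 2 <class 'int'>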
> your intuition is also incorrect.

No, it definitionally isn't. The entire point is that `+` is being used to represent operations where `+` makes intuitive sense. When language designers are revisiting the decision to use the `+` symbol to represent string concatenation, how many of them are thinking about algebraic fields, seriously?
And all of this is exactly why you can't just say that it's universally bad API design to "accept all types". Because the alternative may entail rejecting types for no good reason. Again, dynamically typed languages exist for a reason and have persisted for a reason (and Python in particular has claimed the market share it has for a reason) and are not just some strictly inferior thing.
> you can't just say that it's universally bad API design to "accept all types"
Note, though, that that's not really the API design choice that's at stake here. Python will still throw an exception at runtime if you use the + operator between objects that don't support being added together. So the API design choice is between that error showing up as a runtime exception, vs. showing up as flagged by the type checker prior to runtime.
Or, to put it another way, the API design choice is whether or not to insist that your language provide explicit type definitions (or at least a way to express them) for every single interface it supports, even implicit ones like the + operator, and even given that user code can redefine such interfaces using magic methods. Python's API design choice is to not care, even with its type hinting system--i.e., to accept that there will be interface definitions that simply can't be captured using the type hinting system. I personally am fine with that choice, but it is a design choice that language users should be aware of.
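To make the trade-off concrete, here's a sketch that assumes (since the thread doesn't show it) that `slow_add`'s body is essentially just `a + b`:

    from typing import Any

    def slow_add(a: Any, b: Any) -> Any:
        return a + b        # the checker is silent; any failure is a runtime one

    try:
        slow_add(object(), object())
    except TypeError as e:
        print(e)            # unsupported operand type(s) for +: 'object' and 'object'

Swap the `Any` annotations for something like the `SupportsAdd` protocol above and the same call gets flagged before the program runs, at the cost of also flagging some calls that would have succeeded.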
> No, it definitionally isn't. The entire point is that `+` is being used to represent operations where `+` makes intuitive sense.
Huh? There's no restriction in Python's type system that says `+` has to "make sense".
    import requests

    class Smoothie:
        def __init__(self, fruits):
            self.fruits = fruits

        def __repr__(self):
            return " and ".join(self.fruits) + " smoothie"

    class Fruit:
        def __init__(self, name):
            self._name = name

        def __add__(self, other):
            if isinstance(other, Fruit):
                return Smoothie([self._name, other._name])
            return requests.get("https://google.com")

    if __name__ == "__main__":
        print(Fruit("banana") + Fruit("mango"))
        print(Fruit("banana") + 123)
> banana and mango smoothie
> <Response [200]>
So we have Fruit + Fruit = Smoothie. Overly cute, but sensible by a CS101 OOP definition, potentially the kind of code someone might encounter in the real world, and a demonstration that T + T does not always yield T. And we have Fruit + number = requests.Response. Complete nonsense, but totally valid in Python. If you're writing a generic function `slow_add` that needs to support `a + b` for any two types -- yes, you have to support this nonsense.
I guess that's the difference between the Python and the TypeScript approach here. In general, if something is possible, valid, and idiomatic in JavaScript, then TypeScript attempts to model it in the type system. That's how you get things like conditional types and mapped types that allow the type system to validate quite complex patterns. That makes the type system more complex, but it means that it's possible to use existing JavaScript patterns and code. TypeScript is quite deliberately not a new language, but a way of describing the implicit types used in JavaScript. Tools like `any` are therefore an absolute last resort, to be avoided wherever possible.
When I've used Python's type checkers, I get the feeling that the goal is more to create a new, typed subset of the language that is less capable but easier to apply types to. Then anything that falls outside that subset gets `Any` applied to it, and that's good enough. The problem I find with that is that `Any` is incredibly infectious: as soon as it shows up somewhere in a program, it's very difficult to prevent it from leaking all over the place, meaning you're often back in the same place you were before you added types, but now with the added nuisance of a bunch of types-as-documentation that you can't trust.
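A small sketch of that leakage, with invented function names, under default mypy/pyright settings:

    from typing import Any

    def load_config() -> Any:       # one Any at the boundary...
        return {"retries": "3"}

    def retries() -> int:
        cfg = load_config()         # ...makes cfg an Any as well, so returning
        return cfg["retries"]       # a str where an int was promised goes unflagged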
> My suggestion -- don't rely on magic methods.
So no e.g. numpy or torch then?
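For what it's worth, numpy's and torch's arithmetic APIs are built almost entirely on exactly these magic methods. A trivial numpy example, assuming it's installed:

    import numpy as np

    a = np.arange(3)
    print(a + 1)     # [1 2 3] -- elementwise addition via ndarray.__add__
    print(1.5 + a)   # [1.5 2.5 3.5] -- the reflected case via ndarray.__radd__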