I felt the better takeaway from this was that it's impossible to know with any certainty how long this will or will not continue, regardless of the data or models you're using. If you (or anyone else) could predict that accurately, you'd be one of the richest people on the planet.
I can't say with any provable certainty when (or whether) AI will implode or succeed, because that's not my area of expertise. What I can do is point out and discuss flaws in the common booster and doomer arguments, and identify problems neither side seems willing to discuss. That's cold comfort, and it's not enough to stake my money on one direction or the other - thus I limit my exposure to specific companies, and target indices or funds that will see uplift if things go well, or minimize losses if things go pear-shaped.
I also think relying on such mathematics to justify a position in the first place is kind of silly, especially for technical people. Mathematical models work until they don't, at which point entirely new models must be designed to capture our new knowledge. Logical arguments, on the other hand, are more readily adapted to new data, and represent critical, rather than merely mathematical, reasoning.
Saying AI is going to boom or bust because of sigmoids or Lindy's Law or what have you is not an argument, it's an excuse. The real argument is why those outcomes may or may not emerge, and how we address their consequences, both inside and outside of AI, through regulation, innovation, or policy.
I think his point here is that your probability distribution over AI outcomes should be broad (what you said), but most importantly: this means you must take seriously the possibility that we get superintelligence quite soon.
Basically, a lot of people say "but isn't it also pretty likely that we DON'T get superintelligence?" And yes, it is. But superintelligence being even a remotely plausible outcome is a big fucking deal. Your investment choices, in that context, are not what matters.
People really struggle to think rationally in the face of this shape of uncertainty.