Already considered in my comment.
> Imagine that there's a little computer inside each neuron that decides when it needs to do work
Yeah, there are -bajillions- of floating-point-operation equivalents happening in a neuron deciding what to do. They're probably not all functional.
BUT that's why I said the "useful parts" of the decision:
It may take more than the equivalent of one floating point operation to decide whether to fire. For instance, if you weight each of the neuron's inputs differently to decide whether to fire now, that alone requires a multiplication per input, plus the additions to sum them. If you also consider whether you have fired recently, that's more work on top.
Neurons do all of this, and more, and these mechanisms are known to be functional-- not mere implementation details. A computer cannot make an equivalent choice in one floating point operation.
Of course, this doesn't mean that the brain is optimal-- perhaps you can get away with far less work. But if we're going to use it as a model to estimate scale, we have to consider what the actual equivalent work is.
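To make "equivalent work" concrete, here's a minimal sketch of a leaky integrate-and-fire style update with the per-step operations counted in comments. This is my own toy illustration, not anyone's published model; the weights, leak factor, and threshold are made-up numbers.

```python
import numpy as np

def lif_step(v, inputs, weights, fired_recently, leak=0.9, threshold=1.0):
    """One firing decision for a toy leaky integrate-and-fire neuron.

    Even this stripped-down version costs far more than one floating
    point operation per decision.
    """
    drive = np.dot(weights, inputs)  # N multiplies + (N - 1) adds
    v = leak * v + drive             # 1 multiply + 1 add (membrane leak)
    if fired_recently:               # 1 compare (refractory bookkeeping)
        v *= 0.5                     # 1 multiply (post-spike suppression)
    fire = bool(v >= threshold)      # 1 compare
    if fire:
        v = 0.0                      # reset after firing
    return v, fire

# On the order of 2N ops for N inputs, plus a handful more for the leak,
# refractory check, and threshold -- and a real neuron integrates
# thousands of synapses, not three.
v, fire = lif_step(v=0.2,
                   inputs=np.array([0.5, 1.2, 0.1]),
                   weights=np.array([0.3, 0.8, -0.4]),
                   fired_recently=False)
print(v, fire)
```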
I see. Do you think this is what Kurzweil was accounting for when he multiplied by 1000 connections?
Yes, but it probably doesn't tell the whole story.
There are basically a few axes you can view this on:
- Number of connections and complexity of connection structure: how much information is encoded about how to do the calculations.
- Mutability of those connections: these things are growing and changing -while doing the math on whether to fire-.
- How much calculation is really needed to do the computation encoded in the connection structure.
Basically, brains are doing a whole lot of math on a dense structure of information, but not very precisely, because they're made out of meat. There are almost certainly different tradeoffs in how you'd build the system depending on the precision, speed, energy, and storage you have to work with.
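For scale, here's the back-of-envelope version of that estimate. The neuron count, connection count, and update rate are the round figures Kurzweil used; the per-connection op multiplier is my own assumption, standing in for the extra work argued above.

```python
# Kurzweil-style scale estimate (round published figures).
neurons = 1e11          # ~100 billion neurons
connections = 1e3       # the "multiply by 1000 connections" step
updates_per_sec = 200   # ~200 calculations/sec per connection

# Kurzweil implicitly treats each connection update as one operation.
baseline = neurons * connections * updates_per_sec
print(f"baseline: {baseline:.0e} ops/sec")  # ~2e+16

# My assumption: if each connection update really needs, say, 10
# op-equivalents (input weighting, mutability, firing history), the
# estimate scales up accordingly.
ops_per_connection = 10
print(f"adjusted: {baseline * ops_per_connection:.0e} ops/sec")  # ~2e+17
```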