The difference between human and artificial intelligence (whatever "intelligence" is) is the following:

- AI is COMPLICATED (e.g., the world's Internet), yet it is REDUCIBLE and COUNTABLE (even if infinite).
- Human intelligence is COMPLEX; it is IRREDUCIBLE (and it does not need to be large; 3 is a good number for a complex system; see the sketch below).
- AI has a chance of developing useful tools and methods and will certainly advance our civilization; it should not, however, be confused with intelligence (except by persons who do not distinguish the complicated from the complex).
- Everything else is poppycock.
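A minimal sketch of the point that a complex system need not be large (the Lorenz system is my illustration, not something claimed above): three coupled variables already produce chaotic, irreducible behaviour, with nearby states diverging exponentially.

```python
# Illustration: the Lorenz system has only three state variables, yet it
# is chaotic: a tiny system that is COMPLEX rather than COMPLICATED.

def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One explicit-Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)          # reference trajectory
b = (1.0, 1.0, 1.0 + 1e-9)   # perturbed by one part in a billion

for step in range(1, 40001):
    a = lorenz_step(*a)
    b = lorenz_step(*b)
    if step % 10000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.001:5.1f}  separation = {gap:.3e}")
```

Despite starting one part in a billion apart, the two trajectories end up macroscopically different: three variables are enough for irreducibility in practice.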
Very good point.
In fact, I had thought of describing the problem from a systems-theoretical perspective, as this is another way of combining different paths into a common principle.
Here is a sketch, in case you are into this kind of approach:
2. Complexity vs. Complication

In systems theory, the distinction between 'complex' and 'complicated' is critical. Complicated systems can be decomposed, mapped, and engineered. Complex systems are emergent, self-organizing, and irreducible. Algorithms thrive on complication. But general intelligence, especially artificial general intelligence (AGI), must operate in complexity. Attempting to match complex environments through increased complication (more layers, more parameters) leads not to adaptation but to collapse.

3. The Infinite Choice Barrier and Entropy Collapse

In high-entropy decision spaces, symbolic systems attempt to compress possibilities into structured outcomes. But there is a threshold, empirically visible around entropy levels of H ≈ 20 bits (i.e., 2^20 ≈ one million outcomes), beyond which compression fails. Adding more depth does not resolve the uncertainty; it amplifies it (the first sketch below makes these numbers concrete). This is the entropy collapse point: the algorithm does not fail because it cannot compute. It fails because it computes itself into divergence.

4. The Oracle and the Zufallskelerator ("chance accelerator")

To escape this paradox, the system would need either an external oracle (non-computable input) or pure chance. But chance is nearly useless in high-dimensional entropy: the probability of a meaningful jump is infinitesimal, as the second sketch below illustrates. The system becomes a closed recursion: it must understand what it cannot represent. This is the existential boundary of algorithmic intelligence: a structural self-block.

5. The Organizational Collapse of Complexity

The same pattern appears in organizations. When faced with increasing complexity, they often respond by becoming more complicated: adding layers, processes, and rules. This mirrors the AI problem. At some point, the internal structure collapses under its own weight. Complexity cannot be mirrored. It must either be internalized, by becoming complex, or be resolved through a radically simpler rule, as in fractal systems or chaos theory.
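A first sketch to make the numbers in section 3 concrete (the H ≈ 20 threshold itself is an empirical observation and is not tested here): for a uniform choice among N outcomes the Shannon entropy is H = log2(N) bits, so 20 bits corresponds to about a million outcomes, and each extra level of search depth adds log2(b) bits.

```python
import math

# H = log2(N) for a uniform choice among N outcomes, so H ≈ 20 bits
# corresponds to 2**20 ≈ one million outcomes.
print(f"2**20 = {2**20:,} outcomes")

# A decision space explored to depth d with branching factor b has b**d
# outcomes, i.e. H = d * log2(b). "Adding more depth" grows H linearly,
# so the space crosses any fixed threshold after only a few extra levels.
for b, d in [(4, 5), (4, 10), (10, 6), (10, 12)]:
    H = d * math.log2(b)
    print(f"branching {b:2d}, depth {d:2d}: H = {H:5.1f} bits "
          f"({b**d:,} outcomes)")
```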
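A second sketch for section 4, under the assumption (mine, purely for illustration) that a "meaningful jump" means a uniform random point landing within distance eps of a target in the unit cube [0,1]^d: the hit probability collapses as the dimension grows, because the eps-ball's volume vanishes relative to the cube.

```python
import random

def hit_rate(d, eps=0.2, trials=200_000, seed=0):
    """Monte Carlo estimate of P(random point lands within eps of target)."""
    rng = random.Random(seed)
    target = [0.5] * d
    hits = 0
    for _ in range(trials):
        point = [rng.random() for _ in range(d)]
        dist2 = sum((p - t) ** 2 for p, t in zip(point, target))
        if dist2 <= eps * eps:
            hits += 1
    return hits / trials

for d in (1, 2, 5, 10, 20):
    print(f"d = {d:2d}: P(random jump within eps) ≈ {hit_rate(d):.2e}")
```

At d = 10 and above, 200,000 trials typically produce no hits at all, which is the sense in which pure chance is nearly useless in high dimensions.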
6. Conclusion: You Are an Algorithm

An algorithmic system can only understand what it can encode. It can only compress what it can represent. And when faced with complexity that exceeds its representational capacity, it doesn't break. It dissolves: reasoning regresses to default tokens, heuristics, or stalling. True intelligence, human or otherwise, must either become capable of transforming its own frame (metastructural recursion) or accept the impossibility of generality. You are an algorithm. You compress until you can't. Then you either transform or collapse.
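A last sketch of "you compress until you can't" (my illustration; zlib stands in for any general-purpose compressor): structured input shrinks dramatically, while input whose regularities the compressor has no way to represent does not shrink at all.

```python
import os
import zlib

# Structured data compresses well; random bytes are incompressible and
# come back slightly larger because of container overhead.
structured = b"abcdefgh" * 8192            # 64 KiB with an obvious pattern
random_ish = os.urandom(65536)             # 64 KiB of noise

for name, data in [("structured", structured), ("random", random_ish)]:
    out = zlib.compress(data, 9)
    print(f"{name:10s}: {len(data):6d} -> {len(out):6d} bytes "
          f"(ratio {len(out) / len(data):.3f})")
```

Once the regularities a system can represent are exhausted, compression simply stops; past that point the only options are a different representation or none.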
Do you have any proof or at least evidence for these assertions?