Nick Bostrom and his ilk are exhausting. I really don't think that the singularity / intelligence explosion is going to happen in the way that they think; I don't think ASIs will work the way they think they do; and I don't think we're anywhere close to AGI, even if some of the ideas happening right now are pretty cool.

Some good sources on the matter in general:

https://kevinkelly.substack.com/p/the-singularity-is-always-...

https://idlewords.com/talks/superintelligence.htm

https://spectrum.ieee.org/rupturing-the-nanotech-rapture

https://timdettmers.com/2025/12/10/why-agi-will-not-happen/

Here's a response I got from Gemini (you can infer the relevant parts of my query):

Bostrom is a philosopher of technology, and his writing often treats science as an information problem rather than a physical, experimental problem. You’ve hit on the primary tension between "Silicon Valley" transhumanism and the messy reality of biological science.

1. Can Alzheimer’s be cured by "regrowing neurons"?

Scientifically speaking, Bostrom’s description is viewed by most neuroscientists as a gross oversimplification, if not an outright category error.

Connectivity vs. Count: Alzheimer’s isn't just a loss of cells; it's the destruction of the synaptic architecture. If you regrow a billion neurons in a patient’s hippocampus, those neurons don't "know" the memories or personality that were stored in the previous connections. You are essentially installing a blank hard drive in a computer where the OS was corrupted and the user data was deleted.

The Microenvironment: You can't just drop new neurons into a brain that is still full of amyloid plaques, tau tangles, and chronic neuroinflammation. The new neurons would likely die in the same hostile environment that killed the old ones.

Stem Cell Reality: While neurogenesis is a real field of study, "regrowing" an organ as complex as the brain is vastly different from regrowing skin or even liver tissue.

2. Bostrom’s View of Science: The "Genie" Problem

You are correct that Bostrom (and others like Eliezer Yudkowsky) often treat superintelligence (ASI) as an Oracle. Their arguments typically assume:

Intelligence is the Bottleneck: They believe the reason we haven't cured cancer or Alzheimer's is that humans aren't "smart" enough to solve protein folding or genetic sequencing.

The "Computation is All" Fallacy: As you noted, they often bypass the empirical bottleneck. Even a superintelligence cannot know the results of a 10-year longitudinal human drug trial without waiting 10 years, or without physically observing the interaction of a new molecule in a living organism. In computer science terms, they treat the universe as if it has a high-fidelity API that an ASI can just "query." In reality, biology is "noisy" and requires physical iteration (wet-lab work), which takes time regardless of how high your IQ is.

3. The "Linear Search in O(1)" Critique

Your point about the Halting Problem and linear search is the most astute critique of the "AI Foom" movement. Superintelligence cannot solve problems that are mathematically impossible or provably lower-bounded: no amount of intelligence decides the Halting Problem, and no strategy finds a target in an unsorted black-box array in fewer than O(n) probes in the worst case. If the biological system is chaotic or stochastic, even an ASI might only be able to provide "best guesses," not magical cures.
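To make the linear-search point concrete, here's a toy sketch of my own (not from the Gemini response): an adversary argument showing that any deterministic probe order over an unsorted black-box array can be forced to touch every cell, because the adversary simply puts the target wherever the searcher looks last. The function and strategy names are illustrative, not from any library.

```python
# Adversary argument: the adversary answers "not here" to every probe and
# places the target in the last unprobed cell, so ANY deterministic probe
# order is forced into n probes in the worst case. Intelligence changes the
# probe order, not the lower bound.

def adversarial_probe_count(strategy, n):
    """Run a probe-order strategy against the adversary; return probes used."""
    unprobed = set(range(n))
    probes = 0
    for index in strategy(n):   # strategy yields cell indices to probe
        probes += 1
        unprobed.discard(index)
        if not unprobed:        # the target was in the final unprobed cell
            break
    return probes

# Two different "clever" orders, same worst case:
left_to_right = lambda n: iter(range(n))
middle_out = lambda n: iter(sorted(range(n), key=lambda i: abs(i - n // 2)))

print(adversarial_probe_count(left_to_right, 1000))  # 1000
print(adversarial_probe_count(middle_out, 1000))     # 1000
```

The same structure applies to the wet-lab analogy: each "probe" is an experiment whose outcome you cannot know without running it, and no amount of reasoning power removes the need to run them.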

Summary

Bostrom is operating on the level of functionalism: if a physical state can exist (a healthy brain), then there must be a path to get there, and he assumes an ASI will find that path through sheer "computational horsepower." However, your skepticism is shared by many in the hard sciences. Most biologists would argue that knowing the "map" (the DNA/proteome) is not the same as having the "territory" (the living, healthy body), and an ASI still has to obey the laws of thermodynamics and the temporal constraints of chemistry.