I would claim that:

(interactive labs + quizzes) > learning from books

Good online documentation > 5yr old tome on bookshelf

Chat/search with AI > Ctrl+F in a PDF manual

Most of what you claim to be better does not address how people can discover concepts of which they were previously unaware. To wit:

  One cannot complete "labs + quizzes" unless one knows
  how to answer same.

  One cannot "Ctrl-F in a PDF manual" unless they know
  what to search for.
As to online docs being better than a printed "5yr old tome on bookshelf", that depends on if the available online documentation subsumes the book. If it does, awesome, but if it doesn't, then there very likely are things to learn within reach of said bookshelf.

EDIT:

An exemplar to consider is how the Actor Model[0] can be used to define a FaaS[1]-based system. Without being aware of that paper, someone cannot realistically be expected to formulate LLM prompts incorporating concepts identified by same.
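To make the mapping concrete, here is a minimal sketch (Python; the names and event shape are mine, not the paper's nor any FaaS vendor's API): an actor's behavior is a function from (state, message) to (new state, outgoing messages), which drops naturally into a stateless FaaS handler.

  import json

  def counter_actor(state, message):
      # Actor behavior: consume one message, return new state plus
      # messages to send. No shared memory, no locks.
      if message["type"] == "increment":
          new_state = {"count": state.get("count", 0) + 1}
          return new_state, [{"to": "logger", "body": new_state["count"]}]
      return state, []

  def handler(event, context=None):
      # Hypothetical FaaS entry point: the function itself is stateless;
      # state travels with the event (or lives in a backing store).
      new_state, outgoing = counter_actor(event.get("state", {}),
                                          event["message"])
      return {"state": new_state, "outgoing": outgoing}

  # Local smoke test:
  print(json.dumps(handler({"state": {"count": 1},
                            "message": {"type": "increment"}})))

The point stands either way: without the (state, message) -> (state, messages) framing from the paper, one would not know to ask for it.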

Side note: the Actor Model[0] paper is far older than a "5yr old tome" and is very much applicable to this day.

0 - https://dspace.mit.edu/bitstream/handle/1721.1/41962/AI_WP_1...

1 - https://en.wikipedia.org/wiki/Function_as_a_service

Interactive labs can do a great job of teaching skills, but they fall short of teaching understanding. And at some point it's faster to read a book, because the need for practice diminishes.

Hypertext is better than the printed book format, but if you're just starting with something you need a guide that provides a coherent overview. Also, most online documentation is just bad.

Why Ctrl+F? You can still have a table of contents and an index in a PDF, and the PDF format supports links. And I'd prefer filtering/querying over generation, because the latter is always tainted by my prompt. If I type `man unknown_function`, I get an error, not a generated manual page.
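For instance (a minimal sketch; assumes a Unix system with man installed), a query over an existing corpus fails loudly on unknown input, where generation would happily produce something plausible-looking:

  import subprocess

  # Querying existing pages fails loudly on unknown input;
  # a generated answer would fabricate a page instead.
  result = subprocess.run(["man", "unknown_function"],
                          capture_output=True, text=True)
  print(result.returncode)  # nonzero: no such manual page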