It's interesting that, if this became commonplace, it could be much easier to get value out of poorly written books...
Some people have deep knowledge but lack the skills to untangle context and lay out the right learning path for a reader. These people likely cluster around certain neurotypes, which may hold certain kinds of knowledge especially strongly.
Right now, those people shouldn't publish. But if LLMs could augment poorly structured content (not incorrect content, just poorly structured), that could open up more people to share their wisdom.
Anyhow, just thinking out loud here. I'm sure there are some massive downsides coming to mind for people reading :)
Thanks for your comment. I can see how it could be used successfully for those types of books.