I somewhat consistently use NotebookLM to generate podcasts of academic papers I'm reading for my PhD. You have to go read them yourself afterwards, but it makes better use of time at the gym or while doing dishes/groceries.

> You have to go read it yourself afterwards

^ this is important.

Otherwise you may very well miss anything truly surprising or novel.

See for example https://www.programmablemutter.com/p/after-software-eats-the... , an experience report on NotebookLM, where:

> It was remarkable to see how many errors could be stuffed into 5 minutes of vacuous conversation. What was even more striking was that the errors systematically pointed in a particular direction. In every instance, the model took an argument that was at least notionally surprising, and yanked it hard in the direction of banality.

On one hand, 2024 in AI time was a decade ago.

On the other, Google might not have done much to upgrade the podcast feature since then.

This regression towards the mean is still very much a feature of the newer models, in my experience. I don't see how a model that predicts the most likely word from previous context plus corpus data could avoid some bias towards non-novelty / banality.

It’s gotten somewhat better over time, though it’s clearly not their top priority.

I found NotebookLM to consistently make up about 20% of its summaries. Entertaining but unreliable.

I used it mostly to learn about history. There isn’t much damage if it got a 1600s or 1700s detail wrong. My high school teachers got much of it wrong too.

I found the podcast's banter and breathless enthusiasm distracting. I guess there was a way to make it more no-nonsense? I found I lost content when I tuned it for brevity.

I just use ElevenReader for this. I copy in essays or whatever text I want to listen to, and it works decently well. It's far from perfect, but certainly good enough.

Sometimes I'll take deep research output and listen to it that way too.

I tell them “no idle conversation or verbal tics” in the instructions.

I've found notebookLM summaries to be too high-level and oversimplified to be useful. Hopefully in a few years they can go deeper.

You can also use NotebookLM's notebooks as a source for the Gemini app and ask it to do more in-depth summaries with custom prompting.

This somewhat makes the whole of NotebookLM less useful, but still.

I also like doing that for topics I'm tangentially interested in. One minor thing I find annoying is that the narrators switch roles in the middle of the conversation. They start with the female voice explaining a concept to the male voice, and suddenly they switch. By that point I've identified with the voice being explained to.

> You have to go read it yourself afterwards

Or before! Either is mandatory to actually learn the content.


Just listen to actual audiobooks... you're literally doing double the work for no benefit... why?

There aren't a lot of highly technical audiobooks, or ones with the same specificity as an academic paper.

Okay, but the user is describing listening to papers, then having to read the papers because listening to them isn't sufficient. So why bother listening in the first place if you're going to read it anyway?

Not yet, but it seems like AI narration is finally getting good enough to turn any text into an 'audiobook'.

Having said that, I absolutely hate the audio format. I only used it when I had to drive or when I swam laps. But these days I do neither.

No, reading verbatim from a technical paper is way too dense. When read aloud, you need a lot of filler words to slow it down, and repetition to make it stick.

Hmm, fair enough, but text manipulation is exactly where LLMs do shine. Writing and modifying text is what they were made for.

PS: I don't mean the word 'manipulation' in a negative sense.

Writing a book takes 2-3 years on average. Papers are published every day. A cute two-person conversational chat with audio works for a lot of people vs. just reading a paper. "No benefit" to you, perhaps. Don't generalize from your own lived experience.

Okay, but this person is literally saying that listening with LLM tools isn't helping their understanding and that they still have to read the paper... so why listen at all? Why use a tool that literally causes you to do more work?

We all have the same amount of time on this Earth; praising a tool that causes you to do more work is just... weird?

I'd personally never do this, I value my time.

It can synthesize and summarize many topics.

For example, I can give it 8 papers on best practices in online marketing, and it will turn them into a 20-minute podcast.

There are errors, but real podcasters make errors too.