I've built something similar before for my own use cases, and one thing I'd push back on is official subtitles. Basically no video I care about has ever had "official" subtitles, and the auto-generated subtitles are significantly worse than what you get by piping the content through an LLM. I used Gemini because it was the cheapest option, and it still did very well.
The biggest challenge with this approach is that you probably need to pass extra context to the LLM depending on the content. If you are researching a niche topic, there will be lots of mistakes if the audio isn't of high quality, because that knowledge isn't in the LLM's weights.
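Concretely, the context injection is just prompt text passed alongside the audio. Something like this (a simplified sketch; the SDK, model name, and keyword list here are placeholders, not necessarily what I ran):

```
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # placeholder; pick whatever is cheapest

# The important part: spell out the niche vocabulary so the model doesn't have to guess.
topic_context = "Domain: <your niche topic>. Expect jargon such as <term1>, <term2>, <term3>."

audio = genai.upload_file("clip.mp3")  # a pre-cut audio segment
response = model.generate_content([
    "Transcribe this audio verbatim, preserving technical terms exactly. " + topic_context,
    audio,
])
print(response.text)
```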
Another challenge is that I often wanted to extract content from live streams, but they are very long with lots of pauses, so I needed to do some cutting and processing on the audio clips.
In the app I built, an RSS feed of YouTube video subscriptions goes in one end, and a fully built website with summaries, analysis, and transcriptions comes out the other, updated automatically whenever the subscription feed picks up new videos.
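The ingestion end is the simple part: YouTube publishes an Atom feed per channel, so the polling step can look roughly like this (feedparser and the channel ID are illustrative, not necessarily what I used):

```
import feedparser

# Hypothetical channel ID; YouTube serves one Atom feed per channel.
CHANNEL_IDS = ["UCxxxxxxxxxxxxxxxxxxxxxx"]

def latest_videos():
    for channel_id in CHANNEL_IDS:
        url = f"https://www.youtube.com/feeds/videos.xml?channel_id={channel_id}"
        for entry in feedparser.parse(url).entries:
            # Each new entry gets queued for download -> transcription -> summary.
            yield entry.title, entry.link, entry.published

for title, link, published in latest_videos():
    print(published, title, link)
```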
This is amazing feedback, thanks for sharing your deep experience with this problem space. You've clearly pushed past the 'download' step into true content analysis.
You've raised two absolutely critical architectural points that we're wrestling with:
Official Subtitles vs. LLM Transcription: You are 100% correct about auto-generated subs being junk. We view official subtitles as the "trusted baseline" when available (especially for major educational channels), but your experience with Gemini confirms that an LLM-based transcription module is non-negotiable for niche, high-value content. We're planning an optional, higher-accuracy LLM-powered transcription feature for cases where official subs don't exist, specifically one that lets users inject custom context (e.g., topic keywords) to improve accuracy on technical jargon.
The Automation Pipeline (RSS/RAG): This is the future. Your RSS-to-website pipeline is exactly what turns a utility into a Research Engine, and we want YTVidHub to be the first mile of that process. The challenge you mentioned with pre-processing long live stream audio is exactly why our parallel processing architecture needs to be robust enough to handle the audio extraction and cleaning before the LLM call.
I'd be genuinely interested in learning more about your approach to pre-processing the live stream audio to remove pauses and dead air—that’s a huge performance bottleneck we’re trying to optimize. Any high-level insights you can share would be highly appreciated!
For the long videos I just relied on ffmpeg to remove silence. Its silenceremove filter has lots of options, but you may need to fiddle with the parameters to make it work for your audio. I ended up with something like:
```
stream = ffmpeg.filter(
    stream,
    'silenceremove',
    detection='rms',           # detect silence by RMS level rather than peaks
    start_periods=1,           # trim silence at the start of the clip
    start_duration=0,
    start_threshold='-40dB',
    stop_periods=-1,           # -1 = also remove silence in the middle of the clip
    stop_duration=0.15,        # only gaps longer than ~150 ms count as silence
    stop_threshold='-35dB',
    stop_silence=0.15,         # leave 150 ms of silence at each cut so it sounds natural
)
```
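For completeness, that filter call sits inside an ordinary ffmpeg-python chain, roughly like this (filenames are placeholders):

```
import ffmpeg

# Placeholders: pull the audio track, apply the filter, write a cleaned file.
stream = ffmpeg.input('livestream.m4a').audio
# ... the ffmpeg.filter(stream, 'silenceremove', ...) call shown above goes here ...
stream = ffmpeg.output(stream, 'cleaned.wav')
ffmpeg.run(stream, overwrite_output=True)
```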