Actually, I would have agreed with you two years ago. But now, working with AI so much, maybe RSS "is" just the thing we need for some of the distribution.
I'd be happy if AI disappeared, but I quite agree with the prior comment: AI is awful, but RSS isn't terribly useful for many of us either. It depends on the individual, of course; some people love RSS feeds. I don't use them, and I don't find them useful.
RSS is dead because it’s backwards: it requires everyone you want to follow to implement it, since that was the best we could do a decade ago.
We can do better than that: an LLM can ingest unstructured data and turn it into a feed. You shouldn’t need someone else to comply with a protocol just to ingest their data.
I don’t get why people keep fantasizing about a system that gave consumers no control. Scrape the website directly. You decide what’s in the feed, not them.
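To make the "LLM turns a page into a feed" idea concrete, here's a minimal sketch. The LLM call itself is stubbed out with a trivial parser (the function name `llm_extract` and the stub logic are my own illustration, not anyone's actual product); the point is the shape of the pipeline: raw HTML plus an extraction prompt in, a list of feed items out.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FeedItem:
    url: str
    title: str

def llm_extract(html: str, prompt: str) -> list[FeedItem]:
    # Hypothetical: a real build would send the page HTML plus the
    # extraction prompt to an LLM and parse its structured reply.
    # Stubbed with a trivial anchor-tag parse so the sketch runs.
    items = []
    for line in html.splitlines():
        if 'href="' in line:
            url = line.split('href="')[1].split('"')[0]
            title = line.split(">")[1].split("<")[0]
            items.append(FeedItem(url=url, title=title))
    return items

html = (
    '<a href="https://example.com/post-1">First post</a>\n'
    '<a href="https://example.com/post-2">Second post</a>'
)
feed = llm_extract(html, "use each article link as a feed item")
print(json.dumps([asdict(i) for i in feed], indent=2))
```

No protocol on the publisher's side is needed; the prompt is where the consumer decides what counts as an item.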
> an LLM can ingest unstructured data and turn it into a feed.
An LLM can try to do that, yes. But LLMs are lossy compression. RSS feeds are accurate, predictable, and follow a predefined structure. Using LLMs to ingest data that can easily be turned into a parseable data structure seems strange: use the LLM for the "next part" of the formula (comprehension, decision making, etc.)
I mean that your RSS feed can basically be "Go to https://techcrunch.com/latest/ and use each non-video item as a feed item" or "Go to x.com/some_user and make each tweet a feed item", and the LLM can do a perfect extraction of links from HTML response blobs.
The only thing you have to do is ensure it can reliably get the response HTML. Maybe an MCP browser plus a proxy or mirror to seem more human.
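On the "reliably get the HTML" point: one small piece of that is not fetching with a default library user agent, which some sites block outright. A tiny sketch with the standard library (the UA string is made up; a real setup might also need a proxy or a headless browser, which this doesn't cover):

```python
import urllib.request

# Send browser-like headers instead of urllib's default user agent,
# which many sites reject. The UA string below is illustrative only.
req = urllib.request.Request(
    "https://techcrunch.com/latest/",
    headers={"User-Agent": "Mozilla/5.0 (X11; Linux x86_64) feed-reader/0.1"},
)
print(req.get_header("User-agent"))
```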
I built this for myself. The idea is that each feed is a URL + title + a prompt telling the LLM how to extract the links you want, so that it generalizes over all websites.
And each feed item is a canonicalized URL + title + a local copy of the content at that URL, which is an improvement over RSS, since so many RSS feeds don't even contain the content.
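That data model is small enough to sketch. The dataclasses and the `canonicalize` helper below are my own guesses at the shape described above, not the actual code; real URL canonicalization would need more care than dropping the query string and trailing slash.

```python
from dataclasses import dataclass
from urllib.parse import urlsplit, urlunsplit

@dataclass
class Feed:
    url: str
    title: str
    prompt: str   # tells the LLM which links count as items

@dataclass
class FeedItem:
    url: str      # canonicalized
    title: str
    content: str  # local copy of the page, unlike many RSS feeds

def canonicalize(url: str) -> str:
    # Minimal canonicalization: lowercase the host, drop the query,
    # fragment, and trailing slash. Illustrative, not exhaustive.
    s = urlsplit(url)
    path = s.path.rstrip("/") or "/"
    return urlunsplit((s.scheme, s.netloc.lower(), path, "", ""))

print(canonicalize("https://TechCrunch.com/latest/?utm_source=x#top"))
```

Canonicalizing before storing is what lets you deduplicate the same article arriving via different tracking URLs.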
I imagine a reasonably intelligent coding agent would notice that an RSS feed already exists and use it. Possibly transformed if it's not quite the format you want?
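The check the agent would run is standard RSS autodiscovery: look for a `<link rel="alternate" type="application/rss+xml">` tag in the page head before falling back to LLM extraction. A sketch with the standard library (the class name is mine):

```python
from html.parser import HTMLParser

class FeedLinkFinder(HTMLParser):
    """Collect hrefs of advertised RSS/Atom feeds from a page."""
    def __init__(self):
        super().__init__()
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "alternate" and \
                a.get("type") in ("application/rss+xml", "application/atom+xml"):
            self.feeds.append(a.get("href"))

page = ('<html><head><link rel="alternate" '
        'type="application/rss+xml" href="/feed.xml"></head></html>')
finder = FeedLinkFinder()
finder.feed(page)
print(finder.feeds)  # ['/feed.xml']
```

If the list is non-empty, the agent can consume the real feed and only reshape it; otherwise it falls back to scraping.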
I set it up a year or two ago. Now I read 90% of my articles and news through it.
There is also llms.txt (https://llmstxt.org/), e.g. https://joshua.hu/llms.txt / https://joshua.hu/llms-full.txt
LLMs use up tons of energy and water. That's the use case for predicting RSS will dominate tomorrow?
It’s still happening.
I was never an RSS user until half a year ago. Now that’s my only way of browsing my choice of (tech) news sources and blogs.
I've been using RSS daily since 2008 (on feedly since 2013)
I came here via RSS.