> Maybe I'm just old -- a cron job can fetch the info and push it to some notification service too, without also being a chaos agent.

Here's a concrete example: a website showing after-school activities at my kid's school. All the current ones end in March, and we were notified to keep a lookout for new activities.

So I told my OpenClaw instance to monitor it and notify me ONLY if there are activities beginning in March/April.

Now let's break down your suggestion:

> a cron job can fetch the info and push it to some notification service too, without also being a chaos agent.

How exactly is this going to know if the activity begins in March/April? And which notification service? How will it talk to it?

Sounds like you're suggesting writing a script and putting it in a cron job. Am I going to do that every time such a task comes up? Do I need to parse the HTML each time to figure out the exact locators, etc.? I've done that once or twice in the past. It works, but there's always a mental burden in working out all those details. So I typically don't do it. For something like this, I wouldn't have bothered; I would have just checked the site every few days manually.
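To make the burden concrete, here's roughly what that script looks like. Everything specific in it is made up for illustration: the URL, the page structure (one activity per `<li>`), and the ntfy.sh topic are all hypothetical, and a real page would need real locators worked out by hand.

```python
# Hypothetical cron-job monitor: fetch the page, keep activities that
# mention the target months, push a notification if any matched.
import re
import urllib.request

PAGE_URL = "https://example-school.example/activities"  # hypothetical
NTFY_TOPIC = "https://ntfy.sh/my-school-monitor"        # hypothetical

def starts_in_window(activity, months=("march", "april")):
    """True if the activity text mentions one of the target months."""
    return any(m in activity.lower() for m in months)

def extract_activities(html):
    """Naive extraction: assumes one activity per <li>."""
    return re.findall(r"<li>(.*?)</li>", html, flags=re.S)

def check():
    html = urllib.request.urlopen(PAGE_URL).read().decode("utf-8")
    hits = [a for a in extract_activities(html) if starts_in_window(a)]
    if hits:  # notify only when something actually matches
        req = urllib.request.Request(NTFY_TOPIC, data="\n".join(hits).encode())
        urllib.request.urlopen(req)

# check()  # what the cron entry would invoke each day
```

Short, yes. But every line with a locator or a URL in it is a detail you have to sit down and work out.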

Here: You have 15 minutes. Go write that script and test it. Will you bother? I didn't think so. But with OpenClaw, it's no effort.

Oh, and I need to be physically near my computer to write the script.

Now the OpenClaw approach:

I tell it to do this while on a grocery errand. Or while in the office. I don't need to be home.

It's a 4-step process:

"Hey, can you go to the site and give me all the afterschool activities and their start dates?"

<Confirm it does that>

"Hey, write a skill that does that, and notifies me if the start date is ..."

"Hey, let's test the skill out manually"

<Confirm skill works>

"Hey, schedule a check every day at 10:30am"

And we're done.

I don't do this all at once. I can ask it to do the first thing, and forget about it for an hour or two, and then come back and continue.

There are a zillion scripts I could write to make my life easier that I'm not writing. The benefit of OpenClaw is that it's now writing them for me. 15 minutes * a zillion is a lot of time saved.

But as I said: Currently unreliable.

I agree with the sentiment that there are use cases for web scraping where an agent is preferable to a cron job, but I think your particular example can certainly be achieved with a cron job and a basic parser script. Just have Claude write it.

I didn't say it's not doable. I'm not even saying it's hard. But nothing beats telling Claw to do it for me while I'm in the middle of groceries.

Put another way: If it can do it (reliably), why on Earth would I babysit Claude to write it?

The whole point is this: When AI coding became a thing, many folks rediscovered the joy of programming, because now they could use Claude to code up stuff they wouldn't have bothered to. The barrier to entry went down. OpenClaw is simply that taken to the next level.

And as an aside, let's just dispense with parsing altogether! If I were writing this as a script, I would simply fetch the text of the page, and have the script send it to an LLM instead of parsing. Why worry about parsing bugs on a one-off script?
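For illustration, that no-parsing version might look like this. It's a sketch under assumptions: the page URL is hypothetical, and I'm assuming the OpenAI chat completions HTTP endpoint (with an `OPENAI_API_KEY` in the environment), but any LLM API would do the same job.

```python
# "No parsing" variant: dump the raw page into an LLM and let it decide.
import json
import os
import urllib.request

PAGE_URL = "https://example-school.example/activities"  # hypothetical

def build_prompt(page_text):
    """The whole 'parser': one instruction plus the raw page text."""
    return ("List any after-school activities on this page that begin in "
            "March or April, or reply NONE:\n\n" + page_text)

def ask_llm(prompt):
    """POST to the OpenAI chat completions endpoint (any LLM API works)."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-4o-mini",  # model choice is an assumption
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def check():
    page = urllib.request.urlopen(PAGE_URL).read().decode("utf-8")
    answer = ask_llm(build_prompt(page))
    if "NONE" not in answer:
        print(answer)  # or push to a notification service

# check()  # run from cron
```

No locators, no selectors, no parsing bugs; the trade-off is an API call per check.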

Scripts fail. Agents exfiltrate your data because someone hacked the school's website with prompt injections. Make sure it's a choice and not ignorance of the risks.

> Scripts fail.

Which is totally fine for the majority of tasks.

> Agents exfiltrate your data

They can only exfiltrate the data you give them. What's the worst a prompt injection attack will give them?

Container security is an entire subfield of infosec. For example: https://github.com/advisories/GHSA-w235-x559-36mg

People on both sides are just getting started finding all the ways these tools' security assumptions can be abused, or defended. RSS is the right tool for this problem, and I would be surprised if their CMS doesn't produce a feed on its own.
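If the CMS does expose a feed, the whole check reduces to polling structured data, with no scraping and no untrusted page content going anywhere near an agent. A stdlib-only sketch (the feed URL is hypothetical):

```python
# Poll an RSS 2.0 feed and print item titles with their dates.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example-school.example/activities/feed"  # hypothetical

def items_from_feed(xml_text):
    """Return (title, pubDate) pairs from an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("title", default=""),
         item.findtext("pubDate", default=""))
        for item in root.iter("item")
    ]

def check():
    xml_text = urllib.request.urlopen(FEED_URL).read().decode("utf-8")
    for title, date in items_from_feed(xml_text):
        print(date, title)

# check()  # run from cron
```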

I don't use a container. I use a VM.

I'm not totally naive. I had the VM fairly hardened originally, but it proved to be inconvenient. I relaxed it so that processes on the VM can see other devices on the network.

There's definitely some risk to that.