As with any new tool/technology, you have to try it. And even then the benefits won't be obvious until you've played with it for a few days/weeks. With LLMs in general, it took me months before I found really good use cases.

Simple example: I tell (with my voice) my OpenClaw instance to monitor a given web site daily and ping me whenever a key piece of information shows up there.

The real problem is that it is fairly unreliable. It would often ping me even when the information had not shown up.

Another example: I'm particular about the weather-related information I want, and so far have not found any app that has everything. I got sick of going to a particular web site and clicking through things to get this information. So I created a Skill to get what I need, and now I just ask for it (verbally), and I get it.

As the GP said: this is what Siri etc. should have been.

> Simple example: I tell (with my voice) my OpenClaw instance to monitor a given web site daily and ping me whenever a key piece of information shows up there.

Maybe I'm just old -- a cron job can fetch the info and push it to some notification service too, without also being a chaos agent. It seems I pay the security cost here, and in return I save 15 minutes of writing a script. The juice doesn't seem to be worth the squeeze.
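For concreteness, here's roughly what I mean -- a minimal cron-able sketch (the URL, the keyword, and the ntfy.sh topic are all placeholders I made up; the real check would depend on the site):

```python
# Minimal cron-able watcher: fetch a page, notify if a keyword shows up.
# Run from cron, e.g.:  30 10 * * * /usr/bin/python3 watch.py
import urllib.request

URL = "https://example.com/activities"      # placeholder site
KEYWORD = "registration open"               # placeholder "key info"
NTFY_TOPIC = "https://ntfy.sh/my-topic"     # hypothetical notification endpoint

def page_has_keyword(html: str, keyword: str) -> bool:
    """Dumb substring check; a real site may need smarter matching."""
    return keyword.lower() in html.lower()

def notify(msg: str) -> None:
    # ntfy.sh treats a plain POST body as the notification text.
    req = urllib.request.Request(NTFY_TOPIC, data=msg.encode())
    urllib.request.urlopen(req)

if __name__ == "__main__":
    html = urllib.request.urlopen(URL).read().decode("utf-8", "replace")
    if page_has_keyword(html, KEYWORD):
        notify(f"Keyword {KEYWORD!r} showed up on {URL}")
```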

But they don't just want the text of the website pushed as a notification every day. They want the bot to load the site, likely perform some kind of interaction, decide if the thing they're looking for is there, and then notify them.

All of which can already be done programmatically without OpenClaw.

Not with a single prompt.

> Maybe I'm just old -- a cron job can fetch the info and push it to some notification service too, without also being a chaos agent.

Here's a concrete example: A web site showing after school activities for my kid's school. All the current ones end in March, and we were notified to keep a lookout for new activities.

So I told my OpenClaw instance to monitor it and notify me ONLY if there are activities beginning in March/April.

Now let's break down your suggestion:

> a cron job can fetch the info and push it to some notification service too, without also being a chaos agent.

How exactly is this going to know if the activity begins in March/April? And which notification service? How will it talk to it?

Sounds like you're suggesting writing a script and putting it in a cron job. Am I going to do that every time such a task comes up? Do I need to parse the HTML each time to figure out the exact locators, etc.? I've done that once or twice in the past. It works, but there is always a mental burden in working out all those details. So I typically don't do it. For something like this, I wouldn't have bothered -- I would have just checked the site every few days manually.

Here: You have 15 minutes. Go write that script and test it. Will you bother? I didn't think so. But with OpenClaw, it's no effort.

Oh, and I need to be physically near my computer to write the script.

Now the OpenClaw approach:

I tell it to do this while on a grocery errand. Or while in the office. I don't need to be home.

It's a 4-step process:

"Hey, can you go to the site and give me all the afterschool activities and their start dates?"

<Confirm it does that>

"Hey, write a skill that does that, and notifies me if the start date is ..."

"Hey, let's test the skill out manually"

<Confirm skill works>

"Hey, schedule a check every day at 10:30am"

And we're done.

I don't do this all at once. I can ask it to do the first thing, and forget about it for an hour or two, and then come back and continue.

There are a zillion scripts I could write to make my life easier that I'm not writing. The benefit of OpenClaw is that it now is writing them for me. 15 minutes * 1 zillion is a lot of time I've saved.

But as I said: Currently unreliable.

I agree with the sentiment that there are use cases for web scraping where an agent is preferable to a cron job, but I think your particular example can certainly be achieved with a cron job and a basic parser script. Just have Claude write it.
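For the sake of argument, a sketch of such a parser script, using only the stdlib. I'm assuming the site renders activities as a simple table with name and start-date columns -- the real markup will differ:

```python
# Sketch: extract (activity, start date) pairs and flag March/April starts.
# Assumes a simple <tr><td>name</td><td>YYYY-MM-DD</td></tr> layout.
from html.parser import HTMLParser
from datetime import date

class ActivityParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []
        self.rows = []

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True
        elif tag == "tr":
            self.cells = []          # start collecting a new row

    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False
        elif tag == "tr" and len(self.cells) >= 2:
            self.rows.append((self.cells[0], self.cells[1]))

    def handle_data(self, data):
        if self.in_td:
            self.cells.append(data.strip())

def spring_starts(html: str) -> list[tuple[str, str]]:
    """Return (name, start date) for activities starting in March or April."""
    p = ActivityParser()
    p.feed(html)
    out = []
    for name, start in p.rows:
        try:
            d = date.fromisoformat(start)
        except ValueError:
            continue                 # skip rows whose date doesn't parse
        if d.month in (3, 4):
            out.append((name, start))
    return out
```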

I didn't say it's not doable. I'm not even saying it's hard. But nothing beats telling Claw to do it for me while I'm in the middle of groceries.

Put another way: If it can do it (reliably), why on Earth would I babysit Claude to write it?

The whole point is this: When AI coding became a thing, many folks rediscovered the joy of programming, because now they could use Claude to code up stuff they wouldn't have bothered to. The barrier to entry went down. OpenClaw is simply that taken to the next level.

And as an aside, let's just dispense with parsing altogether! If I were writing this as a script, I would simply fetch the text of the page, and have the script send it to an LLM instead of parsing. Why worry about parsing bugs on a one-off script?
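Something like this, assuming an OpenAI-style chat API (the model name, question wording, and URL are placeholders; any similar client would do):

```python
# Sketch: skip HTML parsing entirely, let an LLM read the page text.
# Assumes an OpenAI-style chat-completions API; model/prompt are placeholders.
import urllib.request

QUESTION = ("Does this page list any after-school activities that start in "
            "March or April? Answer YES or NO, then list them.")

def build_messages(page_text: str) -> list[dict]:
    """Package the raw page text and the question for the LLM."""
    return [
        {"role": "system", "content": "You answer questions about a web page."},
        {"role": "user", "content": f"{QUESTION}\n\nPAGE TEXT:\n{page_text}"},
    ]

def is_positive(reply: str) -> bool:
    """Treat a reply starting with YES as a hit worth notifying about."""
    return reply.strip().upper().startswith("YES")

if __name__ == "__main__":
    from openai import OpenAI  # pip install openai; any similar client works
    page = urllib.request.urlopen("https://example.com/activities").read().decode()
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=build_messages(page),
    )
    if is_positive(resp.choices[0].message.content):
        print("Notify: new spring activities found")
```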

Scripts fail. Agents exfiltrate your data because someone hacked the school's website with prompt injections. Make sure it's a choice and not ignorance of the risks.

> Scripts fail.

Which is totally fine for the majority of tasks.

> Agents exfiltrate your data

They can only exfiltrate the data you give them. What's the worst a prompt injection attack will get from them?

Container security is an entire subfield of infosec. For example: https://github.com/advisories/GHSA-w235-x559-36mg

People on both sides are just getting started finding all the ways to abuse these tools' security assumptions, or to protect you from such abuse. RSS is the right tool for this problem, and I would be surprised if their CMS doesn't produce a feed on its own.
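If the CMS does expose a feed, the check needs almost no code. A stdlib-only sketch (the feed URL is a guess at what such a CMS might expose, and the keyword match is deliberately crude):

```python
# Sketch: check an RSS feed for items mentioning March/April starts.
# Uses only the stdlib; assumes a standard RSS 2.0 <channel><item> layout.
import urllib.request
import xml.etree.ElementTree as ET

def matching_items(rss_xml: str, keywords=("March", "April")) -> list[str]:
    """Return titles of feed items whose title or description mention a keyword."""
    root = ET.fromstring(rss_xml)
    hits = []
    for item in root.iter("item"):
        title = item.findtext("title", "")
        desc = item.findtext("description", "")
        text = f"{title} {desc}"
        if any(k.lower() in text.lower() for k in keywords):
            hits.append(title)
    return hits

if __name__ == "__main__":
    url = "https://school.example.com/activities/feed.rss"  # hypothetical
    xml_text = urllib.request.urlopen(url).read().decode()
    for title in matching_items(xml_text):
        print("New:", title)
```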

I don't use a container. I use a VM.

I'm not totally naive. I had the VM fairly hardened originally, but it proved to be inconvenient. I relaxed it so that processes on the VM can see other devices on the network.

There's definitely some risk to that.