I'm going to take the hyper-literal interpretation and describe what I'm working on literally today, since I'm always working on the same old project, Marginalia Search, and have been for going on five years now.

* Integrating website liveness data into the crawler so it can make more informed decisions about whether to keep or wipe a website's data when the site can't be reached during crawling.

* Working out why the liveness data gathering process isn't stopping as scheduled.

* Deploying a voluntary max-charge request header for the commercial API.

* Making website URL elements searchable. They should already be, but aren't for some reason.

* Maybe looking into an intermittent stack trace I get on one of the index partitions.
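For the max-charge header above, a minimal sketch of what client-side spend capping could look like. Everything here is an assumption: the post doesn't specify the header name, the units, or the enforcement semantics, so `X-Max-Charge`, cents as the unit, and rejecting over-budget requests up front are all hypothetical choices for illustration.

```java
import java.util.Optional;

public class MaxChargeHeader {
    // Parse a client-supplied spend cap in cents from a hypothetical
    // X-Max-Charge header value; empty if the header is absent or malformed.
    static Optional<Integer> parseMaxCharge(String headerValue) {
        if (headerValue == null) return Optional.empty();
        try {
            int cents = Integer.parseInt(headerValue.trim());
            return cents >= 0 ? Optional.of(cents) : Optional.empty();
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    // Accept the request only if its estimated cost fits under the cap;
    // no cap means the client opted out of the voluntary limit.
    static boolean withinBudget(Optional<Integer> cap, int estimatedCostCents) {
        return cap.map(c -> estimatedCostCents <= c).orElse(true);
    }

    public static void main(String[] args) {
        System.out.println(withinBudget(parseMaxCharge("100"), 50));   // under cap
        System.out.println(withinBudget(parseMaxCharge("100"), 150));  // over cap
        System.out.println(withinBudget(parseMaxCharge(null), 150));   // no cap set
    }
}
```

The appeal of making it a request header rather than an account setting is that the cap can vary per call, and a server that doesn't understand the header can safely ignore it.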

No blockers.