I use the SingleFile extension to archive every page I visit.
It's easy to set up, but be warned, it takes up a lot of disk space.
$ du -h ~/archive/webpages
1.1T /home/andrew/archive/webpages
https://github.com/gildas-lormeau/SingleFile
How do you manage those? Do you have a way to search them, or a specific way to catalogue them, which will make it easy to find exactly what you need from them?
KaraKeep is a decent self-hostable app that can receive SingleFile pages: you point the SingleFile browser extension at the KaraKeep API. That lets me search my archived pages (plus auto-summarization and tagging via LLM).
Very naive question, surely. What does KaraKeep provide that grep doesn't?
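For a baseline, plain grep already covers crude keyword search over the saved pages:
$ grep -ril 'zfs compression' ~/archive/webpages
(-r recurses, -i ignores case, -l prints only the matching file names.)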
Jokes aside, it has a mobile app.
I don't get it. How does that help him search files on his local file system? Or is he syncing an index of his entire web history to his mobile device?
GP is using the SingleFile browser extension, which can save an entire page as a single .html file. But SingleFile can also send that page straight to KaraKeep instead of downloading it to his local file system (if he's hosting KaraKeep on a NAS on his network). He can then use the mobile app or the KaraKeep web UI to search and view the archived page. KaraKeep does the indexing (including auto-tagging via LLM).
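If you'd rather script that push than use the extension's built-in upload, it's essentially an authenticated POST of the saved HTML to the KaraKeep API. A rough sketch only; the endpoint path and form field below are assumptions, so check the KaraKeep API docs for the real ones:
$ curl -X POST 'https://karakeep.example/api/v1/bookmarks' \
    -H "Authorization: Bearer $KARAKEEP_TOKEN" \
    -F 'file=@page.html'    # hypothetical endpoint and field name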
I see now, thank you.
storage is cheap, but if you wanted to improve this:
1. find a way to dedup media
2. ensure content blockers are doing well
3. for news articles, put them through readability and store the markdown instead. if you wanted to be really fancy, you could instead try to programmatically build a "template" per site you've visited across its multiple pages, so the shared style is kept once instead of being stored again with every page's content. alternatively a good compression algo could get you most of that, if you had your directory laid out like /home/andrew/archive/boehs.org.tar.gz and inside of the tar all the boehs.org pages you visited are saved
4. add full-text search (FTS) and embeddings over the pages (see the sketch after this list for 1, 3 and 4)
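A minimal sketch of 1, 3 and 4, assuming jdupes, pandoc and an sqlite3 built with FTS5 are installed (flag names may differ between versions):
# 1: hard-link byte-identical files so duplicated media stops costing space
$ jdupes -r -L ~/archive/webpages
# 3 (roughly): convert a page to markdown; a readability pass would come first
$ pandoc -f html -t gfm page.html -o page.md
# 4: crude full-text index, tags stripped with sed
$ sqlite3 ~/archive/index.db 'CREATE VIRTUAL TABLE pages USING fts5(path, body)'
$ find ~/archive/webpages -name '*.html' -print0 | while IFS= read -r -d '' f; do
    printf "INSERT INTO pages VALUES ('%s', '%s');\n" \
      "$(printf %s "$f" | sed "s/'/''/g")" \
      "$(sed 's/<[^>]*>//g' "$f" | sed "s/'/''/g")" | sqlite3 ~/archive/index.db
  done
$ sqlite3 ~/archive/index.db "SELECT path FROM pages WHERE pages MATCH 'btrfs dedup'"
Embeddings would sit on top of the same stripped text; FTS5 alone already gets you decent keyword search.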
1 and partly 3 - I use btrfs with compression and deduping for games and other stuff. Works really well and is "invisible" to you.
Dedup on btrfs requires setting up a cron job, and you need to pick one of the dedup tools too. It's not completely invisible in my mind because of this ;)
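For example, with duperemove (one of the usual out-of-band tools) the crontab entry might look something like:
# run offline dedup over the archive every Sunday at 03:00;
# the hashfile keeps later runs incremental
0 3 * * 0 duperemove -dr --hashfile=/var/lib/duperemove.hash /home/andrew/archive/webpages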
>storage is cheap
It is. 1.1TB is both:
- objectively an incredibly huge amount of information
- something that can be stored for the cost of less than a day of this industry's work
Half my reluctance to store big files is just an irrational fear of the effort of managing it.
> - something that can be stored for the cost of less than a day of this industry's work
Far, far less even. You can grab a 1TB external SSD from a good name for less than a day's work at minimum wage in the UK.
I keep getting surprised at just how cheap large storage is every time I need to update stuff.
Thanks. I didn't know about this and it looks great.
A couple of questions:
- do you store them compressed or plain?
- what about private info like bank accounts or health insurance?
I guess for privacy one could train oneself to use private browsing mode.
Regarding compression, for thousands of files don't all those per-file compression headers add up? Wouldn't there be space savings in having a global compression dictionary and only storing the encoded data?
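I gather zstd can do roughly this with a trained dictionary; a rough sketch, assuming the pages are separate .html files:
$ zstd --train ~/archive/webpages/*.html -o webpages.dict
$ zstd -D webpages.dict page.html -o page.html.zst
$ zstd -D webpages.dict -d page.html.zst -o restored.html
The dictionary mainly pays off for lots of small, similar files; the per-file headers themselves are only a handful of bytes.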
> do you store them compressed or plain?
Can’t speak to your other issues, but I would think the right file system will save you here. Hopefully someone with more insight can add color, but my understanding is that file systems like ZFS were specifically built for use cases like this, where you have a large set of data you want to store space-efficiently. Rather than a compression dictionary, ZFS simply compresses blocks transparently as they're written, so you keep plain files but they cost less on disk.
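Turning it on is a one-liner, assuming a dataset named tank/archive (zstd needs OpenZFS 2.0+, otherwise use lz4):
$ zfs set compression=zstd tank/archive
$ zfs get compressratio tank/archive    # check how much it's actually saving
New writes get compressed transparently; existing files only shrink once they're rewritten.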
By default, singlefile only saves when you tell it to, so there's no worry about leaking personal information.
I haven't put in the effort to build a "bookmark server" that does what SingleFile does but hosted on the internet, because SingleFile already works so well.
I was considering a similar setup, but I don't really trust extensions. I'm curious:
- Do you also archive logged-in pages, infinite scrollers, banking sites, fb etc?
- How many entries is that?
- How often do you go back to the archive? Is stuff easy to find?
- Do you have any organization or additional process (e.g. bookmarks)?
Did you try integrating it with LLMs/RAG etc yet?
You can just fork it, audit the code, add your own changes, and self host / publish.
Are you automating this in some fashion? Is there another extension you've authored, or something similar, that invokes SingleFile on each new page load?
Have you tried MHTML?
SingleFile is way more convenient as it saves to a standard HTML file. The only thing I know that easily reads MHTML/.mht files is Internet Explorer.
Chrome and Edge read them just fine? The format is actually the same as .eml AFAIK.
I remember having issues, but it could be because the .mht's I had were so old; I think I used Internet Explorer's Save As... function to generate them.
I've had such issues with them in the past too, yeah. I never figured out the root cause. But in recent times I haven't had issues, for whatever that's worth. (I also haven't really tried to open many of the old files either.)
You must have several TB of the internet on disk by now...