If you try to reproduce open datasets like fineweb by scraping the pages again, you can't: a lot of the pages no longer exist. That's why you'd want to store the content yourself rather than lose it forever.
It's not "all of the text", it's under 100 trillion tokens, which works out to under 400TB at 4 bytes per token ID, assuming you don't bother running the token streams through a general-purpose compression algorithm before storing them.
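Quick back-of-the-envelope for that figure, as a sketch (the 4 bytes/token assumes token IDs stored as uncompressed uint32s, which is where 400TB comes from):

```python
# Rough storage estimate for ~100 trillion tokens stored as
# raw uint32 token IDs (4 bytes each), no compression.
tokens = 100e12          # ~100 trillion tokens (upper bound)
bytes_per_token = 4      # uint32 covers any realistic vocabulary size
total_bytes = tokens * bytes_per_token
print(f"{total_bytes / 1e12:.0f} TB")  # -> 400 TB
```

In practice even a cheap general-purpose compressor on the token stream would cut that number substantially.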