The value of a library is not just that it does a thing you want, but that it doesn’t do all the things you’d prefer it didn’t.
It’s easy to write a cookie parser for a simple case; clearly your robot was able to hand you one for millidollars. How confident are you that you’ve exhaustively specified the exact subset of situations your code is going to encounter, so the missing functionality doesn’t matter? How confident are you that its implementation doesn’t blow up under duress? How many tokens do you want to commit to confirming that (reasoning, test, pick your poison)?
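To make that concrete, here's a minimal sketch (TypeScript, purely illustrative, not code from anyone's actual robot) of the kind of "simple case" parser under discussion, plus an input it quietly gets wrong:

```typescript
// A naive cookie-header parser: fine for `key=value; key2=value2`,
// with quoted values, empty segments, bare flags, and percent-encoded
// characters all left to chance.
function parseCookies(header: string): Record<string, string> {
  const out: Record<string, string> = {};
  for (const pair of header.split(";")) {
    const idx = pair.indexOf("=");
    if (idx === -1) continue;               // bare flags are silently dropped
    const name = pair.slice(0, idx).trim();
    const value = pair.slice(idx + 1).trim();
    if (name) out[name] = value;            // no decoding, no de-quoting
  }
  return out;
}

// Looks fine on the happy path...
console.log(parseCookies("session=abc123; theme=dark"));
// ...but the questions above are about everything else:
console.log(parseCookies('pref="a; b"; token=x%3Dy'));
// -> { pref: '"a', token: 'x%3Dy' }  (quoted `;` splits wrongly, %3D stays encoded)
```

Whether any of those cases matter for your app is exactly the specification question you'd otherwise be paying a maintained library to have already answered.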
My true opinion is that the questions you pose here may become irrelevant in the face of super intelligence.
We should probably not be making decisions based on the assumption that super intelligence is coming.
If it does come, nothing we do now really matters. If it doesn’t come and we cash a bunch of checks assuming it will, we are screwed.
I mean ASI could just generate the pixels of a webpage at 60 Hz. And who needs cookies if ASI knows who you are after seeing half a frame of the dust on your wall, and then knows all the pixels and all the clicks that have transpired across all your interactions with its surfaces. And who needs webpages when ASI can conjure anything on a screen and its robots and drones can deliver any earthly desire at the cost of its inputs. That is, if we’re not all made into paper clips or dead fighting for control of the ASML guild at that point.
I say all that mostly in jest, but to take your point somewhat seriously: the most accurate thing you’ve said is that no one is ready for super intelligence. What a great and terrible paroxysm it would be.