I'm not sure if I would call it "abstracting."
Imagine that you have a spreadsheet that runs from the beginning of the universe to its end. It contains two columns: the date, and how many days it has been since the universe was born. That's a very big spreadsheet with lots of data in it. If you plot it, you get a seemingly infinite diagonal line.
But it can be "abstracted" as Y=X. And that's what ML does.
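Here's a minimal sketch of that compression, assuming numpy; the cosmic spreadsheet is faked as a plain day-index column, and the fit collapses the whole table into two numbers:

```python
import numpy as np

# Toy stand-in for the "age of the universe" spreadsheet: column x is the
# day index, column y is how many days have elapsed since day zero.
x = np.arange(0, 1_000_000, dtype=float)
y = x.copy()  # elapsed days == day index, by construction

# Fit a degree-1 polynomial: a million rows compress to two parameters.
slope, intercept = np.polyfit(x, y, 1)
print(slope, intercept)  # ~1.0 and ~0.0, i.e. the whole table is just Y = X
```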
That's literally what generalization is.
I don't think it's the same thing because an abstraction is still tangible. For example, "rectangle" is an abstraction for all sorts of actual rectangular shapes you can find in practice. We have a way to define what a rectangle is and to identify one.
A neural network doesn't have any actual conceptual backing for what it is doing. It's pure math. There are no abstracted properties beyond the fact that by coincidence the weights make a curve fit certain points of data.
If there were truly a conceptual backing for these "abstractions", then multiple models trained on the same data should have very similar weights, since there aren't multiple ways to define the same concepts; but I doubt that happens in practice. Instead, the weights are just randomly adjusted until they fit the data points, with no regard for whether there is any sort of cohesion. It's just math.
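You can see this in a toy run. A rough sketch (assuming PyTorch; the sine target and tiny MLP are just arbitrary stand-ins): train the same architecture on the same data from two different seeds, and the flattened weight vectors come out essentially unrelated, even though the two networks compute nearly the same function.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def train(seed):
    torch.manual_seed(seed)
    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    x = torch.linspace(-3, 3, 200).unsqueeze(1)
    y = torch.sin(x)  # identical training data for every seed
    for _ in range(2000):
        opt.zero_grad()
        ((net(x) - y) ** 2).mean().backward()
        opt.step()
    return net, x

net_a, x = train(seed=0)
net_b, _ = train(seed=1)

# Compare the raw parameters directly: flatten everything into one vector each.
wa = torch.cat([p.detach().flatten() for p in net_a.parameters()])
wb = torch.cat([p.detach().flatten() for p in net_b.parameters()])
print("weight cosine similarity:", F.cosine_similarity(wa, wb, dim=0).item())  # near 0

# ...yet the functions they learned agree closely on the training range.
with torch.no_grad():
    print("max prediction gap:", (net_a(x) - net_b(x)).abs().max().item())  # small
```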
That's like saying multiple programs compiled by different compilers from the same sources should have very similar binaries. You're looking in the wrong place! Similarities are to be expected in the structure of the latent space, not in model weights.
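One way to make that concrete (again just a sketch, assuming PyTorch and the same toy sine-fitting setup as above): compare the hidden-layer activations of two independently seeded models with linear CKA, a similarity measure that ignores rotations and rescalings of the feature space. The weights disagree; the representations largely shouldn't.

```python
import torch
import torch.nn as nn

def train(seed):
    # Same toy setup as the sketch above: fit sin(x) with a tiny MLP.
    torch.manual_seed(seed)
    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    x = torch.linspace(-3, 3, 200).unsqueeze(1)
    for _ in range(2000):
        opt.zero_grad()
        ((net(x) - torch.sin(x)) ** 2).mean().backward()
        opt.step()
    return net, x

def linear_cka(a, b):
    # Linear CKA between two activation matrices (samples x features);
    # invariant to orthogonal transforms and isotropic scaling of either space.
    a = a - a.mean(dim=0)
    b = b - b.mean(dim=0)
    return (b.T @ a).norm() ** 2 / ((a.T @ a).norm() * (b.T @ b).norm())

net_a, x = train(0)
net_b, _ = train(1)

with torch.no_grad():
    ha = net_a[:2](x)  # hidden activations: Linear -> Tanh
    hb = net_b[:2](x)

# Expected to land much closer to 1 than the near-zero weight similarity above.
print("latent-space CKA:", linear_cka(ha, hb).item())
```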