I was on a big team working on a giant Oracle database over 25 years ago. I don't remember the term, but each developer had their own playground of the giant database that wasn't affected by anyone else. The DB admin would set it up for each developer in just a few minutes, so it definitely wasn't a copy. Then when a developer needed to reset and go back to the original db again, it just took a few minutes. I just don't remember what it's called, but I think Postgres has had it now for a few years.

You don't actually need to physically copy data: just create, for every table, a view that does a replacing merge between the original read-only data and the developer's own private delta table, preferring the private rows. And you can put an INSTEAD OF trigger on the view to redirect writes to that same private table, making the whole thing transparent to the user.
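A minimal sketch of that overlay idea, using SQLite (which, like Oracle and Postgres, supports INSTEAD OF triggers on views). Table and column names here are made up for illustration; the "deleted" flag is one hypothetical way to let the private copy mask base rows:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
-- shared read-only data
CREATE TABLE base_items (id INTEGER PRIMARY KEY, name TEXT);
INSERT INTO base_items VALUES (1, 'alpha'), (2, 'beta');

-- the developer's private delta table; the deleted flag lets it mask base rows
CREATE TABLE my_items (id INTEGER PRIMARY KEY, name TEXT, deleted INTEGER DEFAULT 0);

-- the "replacing merge": private rows win over the original data
CREATE VIEW items AS
  SELECT id, name FROM my_items WHERE deleted = 0
  UNION ALL
  SELECT id, name FROM base_items
  WHERE id NOT IN (SELECT id FROM my_items);

-- redirect all writes into the private table, transparently to the user
CREATE TRIGGER items_ins INSTEAD OF INSERT ON items
BEGIN
  INSERT OR REPLACE INTO my_items(id, name) VALUES (NEW.id, NEW.name);
END;
CREATE TRIGGER items_upd INSTEAD OF UPDATE ON items
BEGIN
  INSERT OR REPLACE INTO my_items(id, name) VALUES (NEW.id, NEW.name);
END;
CREATE TRIGGER items_del INSTEAD OF DELETE ON items
BEGIN
  INSERT OR REPLACE INTO my_items(id, name, deleted) VALUES (OLD.id, OLD.name, 1);
END;
""")

# the developer works against the view as if it were the real table
con.execute("UPDATE items SET name = 'beta2' WHERE id = 2")
con.execute("INSERT INTO items VALUES (3, 'gamma')")
con.execute("DELETE FROM items WHERE id = 1")
print(con.execute("SELECT id, name FROM items ORDER BY id").fetchall())
# → [(2, 'beta2'), (3, 'gamma')]
print(con.execute("SELECT id, name FROM base_items ORDER BY id").fetchall())
# → [(1, 'alpha'), (2, 'beta')]  -- the original data is untouched
```

Resetting a developer's playground is then just `DELETE FROM my_items`, which would explain the few-minute setup and reset times.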

Not disputing that Oracle might have had something like this built-in, but it sounds like something that I could have whipped up in a day or so as a custom solution. I actually proposed a similar system to create anonymized datasets for researchers when I worked at a national archive institute.

Snowflake uses a similar system with their zero-copy cloning. The clone starts out referencing the original table's partitions, and keeps track of the delta created by subsequent operations. Always found that built-in mechanism pretty neat!

Sounds like a snapshot - a file-based diff of the pages changed since the last full backup - easy to revert for the same reasons.