I built tools like this at several startups to copy production customer data onto a dev instance for the purpose of bug reproduction.

When I moved to big tech, the rules against doing this were honestly one of the biggest drivers of reduced velocity I encountered. Many, many bugs and customer issues are very data-dependent and can’t easily be reproduced without access to the actual customer data.

Obviously I get why the rules against data access like that exist, and yes, many companies have ways to get consent for this access, but it tends to be cumbersome and last-resortish. I think it’s under-appreciated how much it slows down the real-world progress of fixing customer-reported issues.

it is understandable to a certain degree, and it is entirely dependent on your company policy. however, with the dbslice browser UI, you can audit every column, make sure nothing falls through the cracks, and get a signed-off config. once you do that you can just reuse that yaml file for as many extractions as you need

the compliance profile + UI will be in the next release

Cool project.

I haven’t looked at the code too much (yet). I’d be curious to know how you’re handling some of the hairier edge cases when it comes to following foreign key constraints. Things like circular dependencies come to mind, as well as complex joins.

I feel ok posting this because it’s archived, but this problem is basically what we designed for with Neosync [1]. It was probably the hardest feature to fully solve for the customers that needed it the most, which were the ones with the most complex data sets and foreign key dependencies.

To the point where it was almost impossible to do this, at least when syncing directly to another Postgres database with everything intact. Meaning that if on the other side you want another pg database that has all of the same constraints, it is difficult to ensure you got the full sliced dataset. At least the way we were thinking about it.

[1]: https://github.com/nucleuscloud/neosync

that is a valid point. dbslice finds cycles in the fk graph and usually resolves them by nulling a nullable fk to get a valid insert order, then patching it back with deferred updates after the inserts. if a cycle has no nullable fks, postgres output can still work as long as the cycle's constraints are deferrable and deferred fk checking is enabled; otherwise it fails fast with a clear error.
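to make the cycle-breaking concrete, here is roughly what it boils down to in plain postgres sql (the teams/employees tables are just an illustration, not something dbslice generates):

    -- hypothetical two-table cycle: teams.lead_id -> employees.id
    -- and employees.team_id -> teams.id
    BEGIN;

    -- insert with the nullable fk blanked to get a valid insert order
    INSERT INTO teams (id, name, lead_id) VALUES (1, 'core', NULL);
    INSERT INTO employees (id, name, team_id) VALUES (10, 'ada', 1);

    -- patch the broken edge once both rows exist
    UPDATE teams SET lead_id = 10 WHERE id = 1;

    COMMIT;

    -- if no fk in the cycle is nullable, the constraints themselves
    -- need to be deferrable so the checks only run at commit time:
    -- FOREIGN KEY (lead_id) REFERENCES employees (id)
    --   DEFERRABLE INITIALLY DEFERRED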

traversal automatically pulls in parent records so you don’t end up with dangling references, and a validator (enabled by default) can double-check the slice before output. for complex joins, you can opt into subqueries in seed where clauses.
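for example, a seed where clause with a subquery could look something like this (table and column names are made up):

    -- seed: one tenant's orders, but only for active customers
    SELECT * FROM orders
    WHERE tenant_id = 42
      AND customer_id IN (SELECT id FROM customers WHERE status = 'active');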

it covers a lot of messy cases, but i won’t claim it’s fully solved yet. there’s no automatic discovery of relationships that only exist in app code (beyond heuristic hints), and real production schemas will still surface new edge cases. it’s still early-stage, so the more people test it on messy production-like datasets, the faster i can iron those out.

i would also love to hear what you think of the implementation if you check out the code.

Copying production data to dev is widely regarded as a bit of a bad idea if the data contains any information that relates to a person or real-life entity.

Uncontrolled access, inability to comply with "right to be forgotten" legislation, visibility of personal information, including purchases, physical locations, etc etc.

Of course, sales, trading, inventory, etc. data, even with no customer info, is still valuable.

Attempts to anonymise are often incomplete, with various techniques to de-anonymise available.

Database separation, designed to make sure that certain things stay in different domains and can't be combined, also falls apart if you have both databases on your laptop.

Of course, any threat actor will be happy that prod data is available in dev environments, as security is often much lower in dev environments.

Caveat emptor.

I made one of these, however I still have to solve the PII issues and convince the data custodians that it's safe to use.

this is the main reason i started building the compliance profiles. you can choose a profile like hipaa and dbslice auto-applies masking rules, scans the output for residual PII, and generates an audit manifest your data custodian can review.
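conceptually, the masking step rewrites the sensitive columns on the extracted rows, roughly like this (the table, columns, and rules here are illustrative, not the actual hipaa profile):

    -- conceptual masking pass over an extracted slice
    UPDATE patients
    SET ssn   = 'XXX-XX-' || right(ssn, 4),
        email = 'user_' || id::text || '@example.com',
        name  = 'Patient ' || id::text;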

if you want to see what is being masked before anything runs, i also have a browser UI where you can review every table and column, see which fields each compliance profile covers, adjust mappings as much as you want (select which columns to anonymize and how), and export the config

both will be out in the next release.

Sounds fantastic.

This is extremely valuable. Every time we get a problem we aren't able to reproduce, usually an extreme edge case, we end up replicating our entire production DB to get to the error.

I'll surely try this. Thanks for posting it here.
