I was looking for some code, or a product they made, or anything really on their site.
The only GitHub repo I could find is: https://github.com/strongdm/attractor
Building Attractor
Supply the following prompt to a modern coding agent
(Claude Code, Codex, OpenCode, Amp, Cursor, etc):
codeagent> Implement Attractor as described by
https://factory.strongdm.ai/
Canadian girlfriend coding is now a business model.
Edit: I did find some code, though the commit history has unfortunately been squashed: https://github.com/strongdm/cxdb
There's a bunch more under the same org but it's years old.
There's actual code in this repo: https://github.com/strongdm/cxdb
I've looked at their code for a few minutes across a few files, and while I don't understand what they're trying to do well enough to say for sure that anything is definitely a bug, I've already spotted several things that seem likely to be, and several others that I'd class as anti-patterns in Rust. Don't get me wrong: as an experiment this is really cool, but I do not think they've succeeded in making the "dark factory" concept work where every other prominent attempt has fallen short.
Out of interest, what anti-patterns did you see?
(I'm continuing to try to learn Rust!)
To pick a few (from the server crate, because that's where I looked):
- The StoreError type is stringly typed and generally badly thought out. Depending on what they actually want to do, they should either add more variants to StoreError for the different failure cases, replace the strings with sub-types (probably enums) that do the same, or write a type-erased error similar to (or wrapping) the ones provided by anyhow, eyre, etc., but with a status code attached. They definitely shouldn't be checking for substrings in their own error type for control flow.
- So many calls to String::clone [0]. Several of the ones I saw were only necessary because the function took a parameter by reference even though it could have (and, I would argue, should have) taken it by value. (If I had to guess, I'd say the agent first tried to do it without the clone, got an error, and implemented a local fix without considering the broader context.)
- A lot of errors are just ignored with Result::unwrap_or_default or the like. Sometimes that's the right choice, but from what I can see they're letting legitimate errors pass silently. They also treat the values they get in the error case differently, rather than e.g. storing a Result or Option.
- Their HTTP handler has an 800-line closure which they immediately call, apparently as a substitute for the still-unstable try_blocks feature. I would strongly recommend moving that into its own full function instead.
- Several ifs that should have been matches.
- Lots of calls to Result::unwrap and Option::unwrap. IMO, production code should at minimum use expect instead, which forces you to explain what went wrong or why the Err/None case is impossible.
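To make the first and last points concrete, here's a minimal sketch of what I mean. All the names here are hypothetical, not taken from CXDB: a dedicated variant per failure mode lets you derive a status code with an exhaustive match instead of substring checks on an error message.

```rust
use std::fmt;

// Hypothetical sketch: one variant per failure mode, instead of
// StoreError wrapping a String and callers grepping the message.
#[derive(Debug)]
enum StoreError {
    NotFound { key: String },
    Conflict { key: String },
    Io(std::io::Error),
}

impl StoreError {
    // The HTTP status is derived with an exhaustive match, so adding
    // a variant forces you to decide its status at compile time --
    // no `err.to_string().contains("not found")` control flow.
    fn status_code(&self) -> u16 {
        match self {
            StoreError::NotFound { .. } => 404,
            StoreError::Conflict { .. } => 409,
            StoreError::Io(_) => 500,
        }
    }
}

impl fmt::Display for StoreError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            StoreError::NotFound { key } => write!(f, "key not found: {key}"),
            StoreError::Conflict { key } => write!(f, "conflicting write to: {key}"),
            StoreError::Io(e) => write!(f, "io error: {e}"),
        }
    }
}

impl std::error::Error for StoreError {}

fn main() {
    let err = StoreError::NotFound { key: "user:42".into() };
    // And per the last bullet: prefer expect over unwrap, so the
    // "why is this impossible?" reasoning is written down.
    assert_eq!(err.status_code(), 404);
    println!("{err}");
}
```

Same amount of code as the stringly-typed version once you count the substring checks, but the compiler now enforces the error taxonomy for you.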
It wouldn't catch all (or even most) of these, and from what I've seen it might even induce some if agents continue to pursue the most local fix rather than removing the underlying cause, but I would strongly recommend turning on most of clippy's lints if you want to learn Rust.
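For anyone who hasn't done this before, turning on the stricter lint groups is a couple of crate-root attributes (this is just the attribute-based way; you can equivalently configure lints in Cargo.toml on recent toolchains):

```rust
// In the crate root (lib.rs or main.rs). `clippy::all` is on by default;
// `pedantic` adds the stricter, opinionated lints, and `unwrap_used`
// flags every bare unwrap() so you have to justify it or switch to expect.
#![warn(clippy::pedantic)]
#![warn(clippy::unwrap_used)]

fn main() {
    // With the lints above, a bare `some_option.unwrap()` here would warn.
    println!("lints configured");
}
```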
[0] https://rust-unofficial.github.io/patterns/anti_patterns/bor...
(StrongDM AI team member here)
This is great feedback, appreciate you taking the time to post it. I will set some agents loose on optimization / purification passes over CXDB and see which of these gaps they are able to discover and address.
We only chose to open source this over the past few days, so it hasn't yet received a full pass of technical optimization and correction. Human expertise can currently beat the models in general, though the gap seems to be shrinking with each new provider release.
Hey! That sounds an awful lot like code being reviewed by humans
This is why I think AI-generated code is going nowhere. There are actual conceptual differences that the stochastic parrot cannot understand; it can only copy patterns. And there's no distinction between good and bad code (IRL) except through that understanding.
They have a Products page where they list a database and an identity system in addition to attractors: https://factory.strongdm.ai/products
For those of us working on building factories, this is pretty obvious, because you immediately need shared context across agents/sessions and an improved ID + permissions system to keep track of who is doing what.
I don't know if that is crazy or a glimpse of the future (could be both).
PS: TIL about "Canadian girlfriend", thanks!
I was about to say the same thing! Yet another blog post with heaps of navel gazing and zero to actually show for it.
The worst part is they got simonw (perhaps unwittingly, via social engineering) to vouch and stealth-market for them.
And $1000/day/engineer in token costs at current market rates? It's a bold strategy, Cotton.
But we all know what they're going for here. They want to make themselves look amazing to convince the boards of the Great Houses to acquire them. Because why else would investors invest in them and not in the Great Houses directly.
The "social engineering" is that I was invited to a demo back in October and thought it was really interesting.
(Two people whose opinions I respect said "yeah, you really should accept that invitation", otherwise I probably wouldn't have gone.)
I've been looking forward to being able to write more details about what they're doing ever since.
You don't see how a company would gain from inviting bloggers who will happily write positively about them? Talk about a conflict of interest; the FTC should ban companies from doing this.
Justin never invites me in when he brings the cool folks in! Dang it...
Is this the black box folks you mentioned?
It's the dark factory people, yeah: https://news.ycombinator.com/item?id=46739117#46801848
I will look forward to that blog post then, hopefully it has more details than this one.
EDIT: nvm, just saw your other comment.
I think this comment is slightly unfair :(
We’ve been working on this since July, and we shared the techniques and principles that have been working for us because we thought others might find them useful. We’ve also open-sourced the nlspec so people can build their own versions of the software factory.
We’re not selling a product or service here. This also isn’t about positioning for an acquisition: we’ve already been in a definitive agreement to be acquired since last month.
It’s completely fair to have opinions and to not like what we’re putting out, but your comment reads as snarky without adding anything to the conversation.
Can you link to nlspec? It is not easy to find with a search.
That's in this repo: https://github.com/strongdm/attractor
"You" (the whole AI industry in general) are showing a potential future where me, my friends, and potentially the entire industry will be destitute. And you don't even give us the courtesy of showing the actual measurable receipts. You will forgive me for being a bit snarky.
Why will you be destitute? Consider this: how do billionaires make most of their money?
I’ll answer you: people buy their stuff.
What happens if nobody has jobs? Oh, that’s right! Nobody’s buying stuff.
Then what happens? Oh yeah! Billionaires get poorer.
There’s a very rational, self-interested reason sama has been running UBI pilots and Elon is also talking about UBI - the only way they keep more money flowing into their pockets is if the largest number of people have disposable income.
That's hilarious
So I was on a webcast where people are working on this. They are from https://docs.boundaryml.com/guide/introduction/what-is-baml and humanlayer.dev. Mostly they're talking about spec-driven development. Smart people. Here is what I understood from them about spec-driven development, which AFAIU is not far from this.
Let's start with the `/research -> /plan -> /implement` (RPI) loop. When you are building a complex system for teams you _need_ humans in the loop, and you want them focused on design decisions. Having structured workflows around agents provides a better UX for the humans making those design decisions. This is necessary for controlling drift, pollution of context, and general mayhem in the code base. _This_ is the starting thesis of spec-driven development.
How many times have you, as a newbie, copied a slash command, pressed /research then /plan then /implement, only to find after several iterations that the result is inconsistent, and gone back to fix it? Many people still go back and forth with ChatGPT, copying their Jira docs over and answering people's questions on PRD documents. This is _not_ a defence; it is the user experience many have when working with AI.
One very understandable path to solving this is to _surface_ to humans structured information extracted from your plan docs, for example:
https://gist.github.com/itissid/cb0a68b3df72f2d46746f3ba2ee7...
In this very toy version of spec-driven development, the idea is that each step in the RPI loop is broken down and made very deterministic, with humans in the loop. This is a system designed by humans (Chief AI Officer, no kidding) for teams that follow fairly _customized_ processes for working fast with AI without it turning into a giant pile of slop. And the whole point of reading code or doing QA is this: you stop the clock on development and take a beat to look at the high-signal information. Testers want to read tests and QAers want to test behavior, because, well written, they can tell a lot about whether the software works. If you have ever written an integration test on brownfield code with poor test coverage, and made it dependable after several days in the dark, you know what that feels like... Taking that step out is what all the VCs say is the last game in town.. the final game in town.
This StrongDM stuff is a step beyond what I can understand: "no humans should write code", "no humans should read code", really..? But here is what puzzles me even more: spec-driven development as I understand it is, to use borrowed words, like raising a kid: once you are a parent you want to raise your own kid, not someone else's. Because it's just such a human-in-the-loop process. Every company, tech or not, wants to make its own process that its engineers like to work with. So I am not sure they even have a product here...