To pick a few (from the server crate, because that's where I looked):
- The StoreError type is stringly typed and generally badly thought out. Depending on what they actually want to do, they should either add more variants to StoreError for the different failure cases, replace the strings with sub-types (probably enums) that do the same, or write a type-erased error similar to (or wrapping) the ones provided by anyhow, eyre, etc., but with a status code attached (first sketch below). They definitely shouldn't be checking for substrings in their own error type for control flow.
- So many calls to String::clone [0] (second sketch below). Several of the ones I saw were actually only necessary because the function took a parameter by reference even though it could have (and I would argue should have) taken it by value. If I had to guess, I'd say the agent first tried to do it without the clone, got an error, and implemented a local fix without considering the broader context.
- A lot of errors are just ignored with Result::unwrap_or_default or the like. Sometimes that's the right choice, but from what I can see they're letting legitimate errors pass silently. They also treat the values they get in the error case differently, rather than e.g. storing a Result or Option and deciding at the use site (third sketch below).
- Their HTTP handler has an 800-line closure which they immediately call, apparently as a substitute for the still-unstable try_blocks feature. I would strongly recommend moving that into its own full function instead (fourth sketch below).
- Several ifs which should have been matches (fifth sketch below).
- Lots of calls to Result::unwrap and Option::unwrap (sixth sketch below). IMO, in production code you should always at minimum use expect instead, which forces you to explain what went wrong or why the Err/None case is impossible.
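To make the StoreError point concrete, here's roughly the shape I'd aim for (first sketch). The variant names are invented, since I don't know their actual failure cases; the point is that callers match on variants and handlers derive status codes, with no substring checks anywhere:

  use std::fmt;

  #[derive(Debug)]
  enum StoreError {
      NotFound { key: String }, // hypothetical variants: name one
      Conflict { key: String }, // per real failure case
      Io(std::io::Error),
  }

  impl StoreError {
      // Handlers derive the HTTP status from the variant instead of
      // sniffing substrings of an error message.
      fn status_code(&self) -> u16 {
          match self {
              StoreError::NotFound { .. } => 404,
              StoreError::Conflict { .. } => 409,
              StoreError::Io(_) => 500,
          }
      }
  }

  impl fmt::Display for StoreError {
      fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
          match self {
              StoreError::NotFound { key } => write!(f, "key not found: {key}"),
              StoreError::Conflict { key } => write!(f, "write conflict on key: {key}"),
              StoreError::Io(e) => write!(f, "io error: {e}"),
          }
      }
  }

  impl std::error::Error for StoreError {}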
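For the clone point (second sketch), the fix is usually a signature change rather than a call-site change. User and set_name are made up for illustration:

  struct User { name: String }

  // Before: taking a reference forces an internal clone.
  fn set_name_borrowed(user: &mut User, name: &str) {
      user.name = name.to_owned();
  }

  // After: take the String by value. A caller that already owns one
  // moves it in, and the clone disappears.
  fn set_name(user: &mut User, name: String) {
      user.name = name;
  }

  fn main() {
      let mut user = User { name: String::new() };
      let name = String::from("alice");
      set_name(&mut user, name); // moved, not cloned
      println!("{}", user.name);
  }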
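For the swallowed errors (third sketch), keep the Result around and decide at the use site which errors are genuinely ignorable. read_entry here is a stand-in for their store read, not the actual API:

  use std::io;

  fn read_entry(key: &str) -> Result<String, io::Error> {
      std::fs::read_to_string(key) // stand-in for the real store read
  }

  fn handle(key: &str) {
      // Before: read_entry(key).unwrap_or_default() made a failed read
      // and a genuinely empty entry indistinguishable.
      match read_entry(key) {
          Ok(body) => println!("got {} bytes", body.len()),
          Err(e) if e.kind() == io::ErrorKind::NotFound => {
              // The one error we've decided is safe to treat as empty.
              println!("no such entry");
          }
          Err(e) => eprintln!("store read failed for {key}: {e}"),
      }
  }

  fn main() {
      handle("some-key");
  }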
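On the 800-line closure (fourth sketch): the immediately-invoked closure exists only to get ? scoping, and a named function gives you the same thing plus a signature you can document and test. Request, Response, and the helpers below are stand-ins for whatever the handler actually does:

  struct Request;
  struct Response;
  #[derive(Debug)]
  struct HandlerError;

  // Before (the pattern in question):
  //     let resp = (|| -> Result<Response, HandlerError> {
  //         // ... 800 lines ...
  //     })();
  //
  // After: the same `?` behaviour, but nameable and testable.
  fn handle_request(req: &Request) -> Result<Response, HandlerError> {
      let parsed = parse(req)?;
      let stored = store(parsed)?;
      Ok(render(stored))
  }

  fn parse(_req: &Request) -> Result<u32, HandlerError> { Ok(1) }
  fn store(v: u32) -> Result<u32, HandlerError> { Ok(v) }
  fn render(_v: u32) -> Response { Response }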
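The if chains are a smaller thing, but a match on the same value gets you exhaustiveness checking from the compiler (fifth sketch):

  // Before: a chain of ifs re-testing one value.
  fn describe_with_ifs(n: i32) -> &'static str {
      if n == 0 { "zero" } else if n < 0 { "negative" } else { "positive" }
  }

  // After: a single match.
  fn describe(n: i32) -> &'static str {
      match n {
          0 => "zero",
          n if n < 0 => "negative",
          _ => "positive",
      }
  }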
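And on unwrap vs expect (sixth sketch): the expect message documents the invariant you're relying on, which is exactly what a reviewer (and the panic output) needs:

  fn main() {
      let args: Vec<String> = std::env::args().collect();

      // Before: args.first().unwrap() panics with a generic message.
      // After: the message records why None is considered impossible.
      let program = args.first()
          .expect("argv always contains the program name at index 0");
      println!("{program}");
  }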
It wouldn't catch all (or even most) of these, and from what I've seen it might even induce some if agents continue to pursue the most local fix rather than removing the underlying cause, but I would strongly recommend turning on most of clippy's lints if you want to learn Rust.
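A minimal setup, in the crate root (these are real clippy lint groups; the allow line is just an example of opting back out of a lint that's too noisy for your codebase):

  // lib.rs or main.rs
  #![warn(clippy::all, clippy::pedantic)]
  // Re-allow individual lints where a group is too strict, e.g.:
  #![allow(clippy::module_name_repetitions)]

Newer cargo versions can express the same thing in Cargo.toml under a [lints.clippy] table, which keeps it out of the source files.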
[0] https://rust-unofficial.github.io/patterns/anti_patterns/bor...
(StrongDM AI team member here)
This is great feedback, appreciate you taking the time to post it. I will set some agents loose on optimization / purification passes over CXDB and see which of these gaps they are able to discover and address.
We only decided to open-source this in the past few days, so it hasn't yet received a full pass of technical review and correction. Human expertise can currently beat the models in general, though the gap seems to be shrinking with each new provider release.
Hey! That sounds an awful lot like code being reviewed by humans
This is why I think AI-generated code is going nowhere. There are actual conceptual differences that the stochastic parrot cannot understand; it can only copy patterns. And there's no distinction between good and bad code (IRL) except for that understanding.