One of my all-time favorite interviews was with a YC company where the CTO jumped on the call and the first round was a code review of a backend: check for missing indexes, sub-optimal data types, missing validations, missing exception handling, performance problems, missing `await`, and so on.
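
To make the format concrete, here's a rough sketch of the kind of snippet you might hand a candidate, with a few issues seeded for them to find. This is purely illustrative (TypeScript with Express and an in-memory store standing in for a real database), not the actual exercise from that interview:

```ts
import { randomUUID } from "node:crypto";
import express from "express";

interface User {
  id: string;
  email: string;
  age: string; // seeded issue: age stored as a string instead of a number
}

interface UserRepo {
  findByEmail(email: string): Promise<User | null>;
  save(user: User): Promise<User>;
}

// Minimal in-memory stand-in for a real data layer; in a SQL-backed
// version of the exercise, a missing index on `email` would be the
// analogue of this linear scan.
const db: User[] = [];
const users: UserRepo = {
  async findByEmail(email) {
    return db.find((u) => u.email === email) ?? null;
  },
  async save(user) {
    db.push(user);
    return user;
  },
};

const app = express();
app.use(express.json());

app.post("/users", async (req, res) => {
  // Seeded issue: no validation that `email` is present or well-formed.
  const { email, age } = req.body;

  // Seeded issue: missing `await`, so `existing` is a Promise (always
  // truthy) and this endpoint responds 409 for every request.
  const existing = users.findByEmail(email);
  if (existing) {
    res.status(409).json({ error: "email already registered" });
    return;
  }

  // Seeded issue: no exception handling, so a rejected save surfaces as
  // an unhandled error rather than a clean 500 response.
  const created = await users.save({ id: randomUUID(), email, age });
  res.json(created);
});

app.listen(3000);
```

A reviewer working through it should catch the missing validation, the un-awaited promise, the string-typed `age`, and the absent error handling; how many they find, and how quickly, is the signal.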

It was a really great interview, and what I realized from that experience is that this format works so well because it measures both breadth and depth without a lot of pressure. You're not measuring so much for right and wrong as for how experienced an engineer is with real-world scenarios; everyone can find something to fix in the code, but more senior engineers will find more issues, faster, with fewer hints.

I think it also reflects day-to-day responsibilities better, and in this era of AI agents, it mirrors the need for an engineer to carefully review AI-generated code for correctness, performance, security, and fitness for purpose.

I enjoyed the exercise so much that I ended up building a simple, open-source tool to facilitate this kind of code-review interview:

https://coderev.app (https://github.com/CharlieDigital/coderev)