> Engineers who know their craft can still smell the slop from miles away when reviewing it, despite the "advances" made. It comes in the form of overly repetitive code, unnecessary complexity, and a reluctance to really refactor anything at all, even when it's clearly stale and overdue.
I’ve seen reluctance to refactor even 10+-year-old garbage since long before LLMs were first made available to the broader public.
LLM-generated snippets of code are a breath of fresh air compared with much legacy code. Since models learn probability distributions, they gravitate toward the most common ways of doing things--almost like having a built-in linter. Legacy code, on the other hand, often does things in novel ways that leave you scratching your head--the premise behind sites like https://thedailywtf.com/
If it's lasted 10 years and someone is still using it, isn't that a pretty good signal there's a lot of value in the 'garbage'?
I've seen a lot of 'fixes' for 10 year old 'garbage' that turned out to be regressions for important use cases that the author of the 'fix' wasn't aware of.