<< you do not know where that missing 25% is wrong

I think there is something to this line of thinking. I just finished a bigger project and, without going into details, one person from a team supposedly dedicated to providing reliable data about our data was producing odd results. Since the data was not making much sense, I asked how it was produced. I was given a SQL script and an 'and then we applied some regex' explanation.

Long story short, I dug in and found that the applied regex had mangled dates in an unexpected way, and I only caught it because I knew the 'shape' the data was expected to have. I corrected it, because we were right around the deadline, but... I noted it.
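To show the bug class (a hypothetical sketch, not their actual script; the pattern and rows here are made up): an unanchored regex aimed at one field will happily match inside a date column too, and nothing errors out.

    import re

    rows = ["2023-05-04,order-12-08", "2024-11-30,order-01-02"]

    # Intended: rewrite the order suffix from DD-MM to MM/DD.
    # Actual: the unanchored pattern also matches inside YYYY-MM-DD,
    # so the date column is silently corrupted as well.
    bad = [re.sub(r"(\d{2})-(\d{2})", r"\2/\1", r) for r in rows]
    print(bad)   # ['2005/23-04,order-08/12', '2011/24-30,order-02/01']

    # Anchoring the pattern to the field you actually mean fixes it.
    good = [re.sub(r"order-(\d{2})-(\d{2})", r"order-\2/\1", r) for r in rows]
    print(good)  # ['2023-05-04,order-08/12', '2024-11-30,order-02/01']

The point: the bad version produces output, not an exception, so it only looks wrong if you already know what shape the data should have.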

Anyway, I still see LLMs as a tool, but I think there is a reckoning on the horizon as:

1. managers push for more use and more speed, given the new tool

2. people get there faster but wronger, because they go along with 1 and do not check the output (or don't know how to check it, or don't know when it's wrong)

It won't end well, because the culture does not reward careful consideration.

Exactly. I use AI tools daily and they bite me. Not enough to stop, but enough to know. Recently I was building a WebSocket merger of sorts, based on another library's sub-protocol. I wasn't familiar with the language or the protocol, but the AI sure was. However, the AI used a wrong ID when repacking messages. Unless I knew the spec (which I didn't), I never would have known. Eventually, I did read the spec and figured it out.
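For flavor, a minimal sketch of that bug class, assuming a made-up one-byte framing (the real protocol and library aren't something I can reproduce here): the wrong-ID version is structurally valid, so nothing fails at runtime.

    import struct

    # Hypothetical sub-protocol: frame = 1-byte message-type id + payload.
    MSG_DATA = 0x02
    MSG_ACK = 0x03

    def repack(frame: bytes, new_payload: bytes) -> bytes:
        # The AI's version hardcoded an id instead of carrying the
        # original one forward -- valid framing, wrong per the spec:
        #   return struct.pack("B", MSG_ACK) + new_payload
        # Correct: preserve whatever id the incoming frame had.
        return struct.pack("B", frame[0]) + new_payload

    inframe = struct.pack("B", MSG_DATA) + b"hello"
    assert repack(inframe, b"world")[0] == MSG_DATA  # id survives the repack

Both versions parse and round-trip just fine, which is exactly why you only catch it by reading the spec.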

To be clear, I gave the spec to the AI many times, asking what was off, and it never found the issue.

Once I did get it working, the AI one-shotted converting it from Python to Go, with the exception of the above mistake being added back in again.

You don't know what you don't know. That final 25% or 5% or whatever is where the money is, not the 80%. Almost doesn't count.