Not as much upfront. I had plenty of opportunities to adjust and correct along the way. With AI, the cost of not thinking upfront is high and the cost of being wrong in upfront decisions is low, so we bias towards that.

But beyond that, I have been thinking deeply about AI itself, which brings all sorts of new problems: permissions, verification, etc.

> With AI, the cost of not thinking upfront is high and the cost of being wrong in upfront decisions is low, so we bias towards that.

I don't really understand what that means:

1. If the cost of not thinking upfront is high, that means you need to think upfront.

2. If the cost of being wrong upfront is low, that means you don't need to think upfront.

To me, it looks like those assertions contradict each other.