This is extremely funny. AI can't have accountability. Good luck with that.
Use AI to augment, but don't treat it as a 100% replacement if you can't predict and own the failure rate.
My advice would be to use more configurable tools that are less invested in selling fake perfection. Aider works.
> AI can't have accountability
Sure it can. You just have to bake into the reward function "if you do the wrong thing, people will stop using you, therefore you need to avoid the wrong thing".
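In reward-shaping terms, that's roughly the sketch below (a toy Python illustration of the idea, not anyone's actual training code; the `wrong_action` signal and the penalty weight are hypothetical, and in practice you'd only have noisy proxies for "the wrong thing"):

```python
# Toy sketch: shape the reward so "doing the wrong thing" carries a
# cost standing in for "people will stop using you". All names and
# weights are made up for illustration.

def shaped_reward(task_reward: float,
                  wrong_action: bool,
                  attrition_penalty: float = 10.0) -> float:
    """Base task reward, minus a large penalty when the model misbehaves."""
    if wrong_action:
        # Proxy for losing users: subtract a heavy negative term.
        return task_reward - attrition_penalty
    return task_reward

# Example: a correct answer vs. a wrong one with the same base reward.
print(shaped_reward(1.0, wrong_action=False))  # 1.0
print(shaped_reward(1.0, wrong_action=True))   # -9.0
```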
Then you wind up with self-preservation and all the wholly shady shit that comes along with it.
I think the accountability problem is the crux of the "last-mile" problem in AI, and I don't think you can solve it without also producing behavior you don't want.
I don't want to get into a semantics argument, but that's not accountability. That's just one more behavioral prompt/signal; you can't fire the LLM, and it might still do the wrong thing.