> Instead they believe model alignment, trying to understand when a user is doing a dangerous task, etc. will be enough.
In other words: motivated reasoning.