The framing around AI autonomy in national security contexts is genuinely new territory. What's interesting from an agent design perspective is the underlying question: how much should an AI system push back on institutional structures versus defer to human oversight chains? The soul spec approach, in which the AI internalizes safe behavior rather than merely following rules, may be more relevant here than it first appears.