Your assumption is a bad read because it only holds if his set of life priorities contains nothing but maximizing his impact in the world of AI. If that set contains even one other priority (and it could still include a robotic min/max of AI impact alongside it), then your assumption fails.