Safety is a nice idea, but it's not structurally pursuable at this point. Everything is moving too quickly, and we don't really know what is useful and what isn't, just as we don't know what's safe and what isn't.
Anyone pursuing safety will be outcompeted by someone who isn't. Given the scale of investment, there is no patience for any calls to slow down. I tend to believe this won't actually end in disaster, since I doubt it's economical to deploy AI everywhere with so much real control that we can't manage the risks as they evolve, but that's a low-confidence prediction.