Using it inappropriately in production and studying it to learn how to make it responsible to use in production are very separate things. What you're implying is that we should somehow magically leapfrog the current state of the art to a future version that solves all of this generation's problems. Or that we should ignore the technology entirely because developing it through the period where it's less robust than a mature human is too reckless.

The answer is that doing research isn't mutually exclusive with using the technology in appropriate ways. You can use AI responsibly while folks study threat models and model behavior for the use cases that can't yet be deployed responsibly.

> by attempting to lock down the environment as best we can

We already do this as a standard best practice for traditional systems and human access. It even has a name: least privilege.