> AI is currently the equivalent of a very smart child. Would you give production access to a child?

I mean, this is my point! Why are we asking a child to do anything remotely important at all?

Maybe we should wait until the tech is an adult before we start having it do important things for us.

Mitigating the naivety and recklessness of a child AI by attempting to lock down the environment as best we can seems foolish and short-sighted to me, and it will probably not end well.

Using it inappropriately in production and studying it to understand how to make it responsible to use in production are very separate things. What you're implying is that we should somehow magically leapfrog the current state of the art to a future version that solves all the problems with the current generation. Or that we should ignore the technology entirely, because developing it through the period where it's less robust than a mature human is too reckless.

The answer is that doing research isn't mutually exclusive with using the technology in appropriate ways. You can responsibly use AI while folks study threat models and model behavior for use cases that can't yet be deployed responsibly.

> by attempting to lock down the environment as best we can

We literally do this as a general best practice for traditional systems and human access. It even has a name: least privilege.
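To make that concrete, here is a minimal sketch of what least privilege might look like for an AI agent's tool calls: instead of handing the agent the parent process's full environment and filesystem, pass only an explicit allowlist. The names here (`ALLOWED_ENV`, `scrub_env`, `run_tool`) are illustrative assumptions, not the API of any real agent framework.

```python
import os
import subprocess

# Hypothetical allowlist: the only environment variables the tool may see.
# Secrets like AWS keys or API tokens are excluded by default.
ALLOWED_ENV = {"PATH", "LANG"}

def scrub_env(environ: dict) -> dict:
    """Return a copy of the environment containing only allowlisted keys."""
    return {k: v for k, v in environ.items() if k in ALLOWED_ENV}

def run_tool(cmd: list, workdir: str) -> subprocess.CompletedProcess:
    """Run an agent-requested command with a scrubbed environment,
    confined to a designated working directory, with a hard timeout."""
    return subprocess.run(
        cmd,
        cwd=workdir,               # confine file access to the sandbox dir
        env=scrub_env(os.environ), # deny ambient credentials by default
        capture_output=True,
        text=True,
        timeout=30,                # bound runaway executions
    )
```

The point isn't that this is sufficient on its own; it's that the same deny-by-default posture we already apply to human operators and services applies unchanged here.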