It depends on what you are using it for, and how you are using it. If you are using AI to write short functions that you could code yourself in about the same time it takes to review the AI-generated code, then there is obviously no benefit.
There are, however, various cases where using AI can speed development considerably. One is a larger, complex project (thousands of LOC) where weeks of upfront design would have been followed by weeks or months of implementation and testing.
You are still going to do the upfront design work (no vibe coding!) and play the role of lead developer, breaking the work into manageable pieces/modules, but now there is value in having the AI write, test, and debug the code, including generating unit tests, since this would otherwise have been a lengthy process.
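To make that concrete: a module-level task you hand off might be "implement a slug generator for article URLs", and the agent's deliverable would be the function plus unit tests it has actually run. This is a minimal hypothetical sketch (the module and its tests are invented for illustration, not from any real project):

```python
import re

def slugify(title: str) -> str:
    """Convert a title to a URL-safe slug (hypothetical example module)."""
    slug = title.strip().lower()
    # Collapse any run of non-alphanumeric characters into a single hyphen
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

# The kind of agent-generated unit tests you'd expect to see run, covering
# punctuation, surrounding whitespace, and already-clean input:
assert slugify("Hello, World!") == "hello-world"
assert slugify("  spaces  ") == "spaces"
assert slugify("already-slugged") == "already-slugged"
print("all tests passed")
```

The point isn't the code itself; it's that "done" for each module means the tests above were written and executed by the agent, not left for you.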
This assumes you are using a very recent capable frontier model in an agentic way (e.g. Claude Code, or perhaps Claude web's Code Interpreter for Python development) so that the output is debugged and tested code. We're not talking about just having the AI generate code that you then need to fix and test.
This also assumes a controlled, managed process. You are not vibe coding, but rather using the AI as a pair programmer working on one module at a time. You don't need to separately review the code line by line, but you do need to be aware of what is being generated and what tests are being run, so that you have similar confidence in the output to what you would have if you'd pair-programmed it with a human, or delegated it to someone else with specifications detailed enough that "tested code meeting specs" means you don't have to review the code in detail unless you choose to.
I haven't tried it myself, but note that you can also use AI to do code reviews, based on a growing set of coding standards and guidelines that you provide. These reviews can then become part of the development process, so that the agent writing the code iterates until it passes code review as well as the unit tests.
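That iteration loop can be sketched as pseudocode. Every function here is a hypothetical stand-in for an agent call (there is no real API being shown); the stubs just simulate a module converging after a couple of revisions:

```python
# Sketch of the write -> test -> review loop described above.
# All agent functions are hypothetical stubs, not a real library.

def generate_code(spec, feedback):
    # Stand-in for prompting the coding agent with the spec and prior feedback.
    return f"code for {spec!r} (rev {len(feedback)})"

def run_unit_tests(code):
    # Stand-in for executing the generated tests; pretend rev 0 fails.
    return "rev 0" not in code

def run_code_review(code, guidelines):
    # Stand-in for a reviewing agent applying your standards/guidelines.
    return [] if "rev 2" in code else ["tighten error handling"]

def develop_module(spec, guidelines, max_iters=5):
    feedback = []
    for _ in range(max_iters):
        code = generate_code(spec, feedback)
        if not run_unit_tests(code):
            feedback.append("unit tests failed")
            continue
        issues = run_code_review(code, guidelines)
        if not issues:
            return code            # passes both the tests and the review
        feedback.extend(issues)
    raise RuntimeError("module did not converge; escalate to a human")

print(develop_module("parse config file", ["PEP 8", "no bare except"]))
```

The design point is that review feedback goes back into the same loop as failing tests, so "done" means both gates passed, and the human only steps in when the loop fails to converge.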