Dude, this generation of AI video models is just starting to understand basic camera production terms, and then it's exactly like LLM generation: every prompt is a pull of a slot machine arm. You might get what you want, but that's "winning," and the slot machine only pays out one winner in every 100 pulls. Everything that can go wrong does.
For example, I'm currently working with a walking, talking character across multiple AI video models and systems. Generated clips longer than 8 seconds risk rapid quality loss, though sometimes you can get up to 12-19 seconds before the generation breaks down. That means you need to simulate a multi-camera shoot on a stage, so you can cut around the character(s) and build a longer sequence.

But now you need multiple views of the same location to place your character(s) into, and current AI models can't reliably give you different angled views of an environment. We just got consistent different views of characters; it'll be a while before environments can be examined from any angle. BUT that's only if people realize this isn't in the models yet, and so far people are so fascinated by the fantasy violence and sexual content they can make that nobody notices you cannot simply "look left and right" in any of these models with any consistency or reliability.

There are workarounds, like building one's entire set and environments as 3D models, for use as backgrounds and starting frames, but that's now 3D media production + AI, and none of the AI tools generate media that even has alpha channels, plus a lot of similar incompatibilities like that.
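To make the alpha-channel point concrete: compositing a generated character over a 3D-rendered background needs a per-pixel alpha (coverage) value, which is exactly what AI video frames don't carry, so you'd have to add a matting/rotoscoping step first. Here's a minimal sketch of the standard "over" operation on single RGBA-style pixels (pure Python, no real video I/O; the pixel values are made up for illustration):

```python
# Standard "over" compositing: out = fg * alpha + bg * (1 - alpha).
# AI video tools output plain RGB frames with no alpha channel, so this
# alpha value has to come from a separate matting step before a generated
# character can be layered over a 3D-rendered background.

def over(fg_rgb, bg_rgb, alpha):
    """Blend one foreground pixel over one background pixel.

    fg_rgb, bg_rgb: (r, g, b) tuples in 0-255.
    alpha: foreground coverage in [0.0, 1.0] -- the channel that
    current AI video outputs are missing.
    """
    return tuple(
        round(f * alpha + b * (1.0 - alpha))
        for f, b in zip(fg_rgb, bg_rgb)
    )

# Fully opaque foreground pixel: the background is ignored.
print(over((200, 50, 50), (0, 0, 0), 1.0))    # (200, 50, 50)
# Half-transparent edge pixel (e.g. hair): colors mix 50/50.
print(over((200, 50, 50), (0, 100, 0), 0.5))  # (100, 75, 25)
```

Without that per-pixel alpha you're stuck with the whole rectangular frame, which is why lacking alpha output makes mixing AI footage with 3D-rendered sets so awkward.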