At least video games use way more complex models for that, AFAIK. It might be tricky to apply to mixes of recorded media, so loudness is commonly used there.
Unreal Engine, the engine I'm most familiar with, implements VBAP for panning 3D moving sources, which amounts to plain amplitude panning when played through loudspeakers. It also supports Ambisonics recordings for ambient sound, which are decoded to 7.1.
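To make "VBAP is just amplitude panning" concrete, here is a minimal 2D sketch for a stereo pair (not Unreal's actual code, and the ±30° speaker angles are just an assumed standard stereo setup): the source direction is expressed as a positive combination of the two nearest loudspeaker directions, and those weights become the channel gains.

    import numpy as np

    def vbap_2d(source_deg, spk_left_deg=30.0, spk_right_deg=-30.0):
        # Unit vectors for the two loudspeakers and the virtual source.
        def unit(deg):
            r = np.radians(deg)
            return np.array([np.cos(r), np.sin(r)])
        L = np.column_stack([unit(spk_left_deg), unit(spk_right_deg)])
        g = np.linalg.solve(L, unit(source_deg))  # raw per-speaker gains
        g = np.clip(g, 0.0, None)                 # no antiphase feeds
        return g / np.linalg.norm(g)              # constant-power normalization

    print(vbap_2d(10.0))  # source 10 deg left of center -> left gain > right gain

Note that only the gains change with direction; the signal itself is not filtered, which is exactly why this only works well from the sweet spot.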
For headphone-based spatialization (binaural synthesis), virtual Ambisonics fed into HRTF convolution is typically used, which is not amplitude based; height in particular is encoded through spectral filtering.
So loudspeakers -> mostly amplitude based, headphones -> not amplitude based.
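A rough sketch of the headphone path, assuming you already have a left/right HRIR pair for the source direction (e.g. loaded from some HRTF dataset, not shown here; real engines typically go through a virtual Ambisonics bus first rather than convolving per source like this):

    import numpy as np
    from scipy.signal import fftconvolve

    def binauralize(mono, hrir_left, hrir_right):
        # The direction-dependent spectral cues (including height) live in
        # the impulse responses, not in simple level differences.
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        return np.stack([left, right], axis=0)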
Which makes sense: there is only so much you can do with loudspeakers to affect the perceived location, since you don't really know where the loudspeakers and the listener actually are relative to each other.
Actually, the farther the speakers are from the angles specified in the 7.1 format (see https://www.dolby.com/about/support/guide/speaker-setup-guid...), the worse the localization accuracy will be. And if the person is not sitting centered relative to the loudspeakers, but closer to one of them, localization can completely collapse and the sound will seem to come only from the nearest loudspeaker.
In the case of gamers, they are usually centered relative to the loudspeakers, and the loudspeakers tend to be placed symmetrically around the computer screen, so the problem is not so bad.
For cinema audiences the problem is much worse, since most of the audience sits off center... That is why 7.1 has a center loudspeaker: the dialogue is sent directly there to make sure that at least the dialogue comes from the right direction.