You pay for the privilege of paying for what you use - every second of CPU time when a lambda is running is marked up immensely versus the same second of compute on bare metal or even a VPS. So your workload needs to be sufficiently "duck curved", parabolic, or erratically spiky in order to _actually_ make cost savings on compute.
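To make that concrete, here's a back-of-envelope break-even sketch. All the prices are hypothetical round numbers for illustration (real serverless billing has per-request fees, memory tiers, and free allowances that this ignores); the point is only that the flat monthly price of an always-on box buys a surprising amount of marked-up per-second compute.

```python
# Back-of-envelope: serverless vs. always-on VPS compute cost.
# Prices below are hypothetical, illustrative figures, not any vendor's rates.

SECONDS_PER_MONTH = 30 * 24 * 3600  # 2,592,000

def serverless_cost(busy_seconds: float, price_per_second: float = 0.00002) -> float:
    """Pay only for the seconds your function is actually running,
    at a heavily marked-up per-second rate."""
    return busy_seconds * price_per_second

def vps_cost(monthly_price: float = 20.0) -> float:
    """Flat monthly price, regardless of utilisation."""
    return monthly_price

# Utilisation at which the two bills are equal; below this,
# pay-per-use wins, above it the always-on box wins.
break_even = vps_cost() / serverless_cost(SECONDS_PER_MONTH)
print(f"break-even utilisation: {break_even:.0%}")
```

With these made-up numbers the crossover sits around 39% average utilisation: a service that is busy more than about nine hours a day costs more on serverless than the flat-rate box, which is why the spiky or "duck curved" load profile matters so much.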
The personnel question is harder to quantify, but note that the need for infra skills didn't go away with cloud. Cloud is complicated; you still need people who understand it, and that still costs money, whether it's additional skills spread across developers or dedicated cloud experts, depending on organisation size. These aren't a far cry from sysadmins. It really depends on the skillset of your individual team. And traditional hosting has become so much easier, with so much automation, that it's no longer as specialist, time-consuming, or complicated as many people think.
Cloud _can_ be cheaper, but you need the right combination of workload shape and skills gap for it to _actually_ come out cheaper.