As someone who got into multi-agent systems (MAS) research relatively recently (~4 years, mostly in distributed optimization), I see two major strands of it, both of which are certainly still in search of the magical "emergence":

There is the formal view of MAS, a direct extension of older work on cooperative and competitive agents, which tries to model and then rigorously prove emergent properties. I also count "classic" distributed optimization methods with convergence and correctness guarantees in this area. Maybe the best-known applications of this are coordination algorithms for robot/drone swarms.
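To make the "formal" flavor concrete, here is a minimal sketch of average consensus, a common building block behind swarm coordination/rendezvous schemes. The topology, step size, and iteration count are made up for illustration; the actual convergence proofs hinge on graph connectivity and a suitable (doubly stochastic) weight matrix.

```python
import numpy as np

# Minimal average-consensus sketch: each agent repeatedly moves toward
# its neighbors' states. On a connected graph with an appropriate step
# size, all states converge to the average of the initial values --
# exactly the kind of property this literature proves.

# Hypothetical 4-agent ring topology (adjacency list).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

x = np.array([1.0, 5.0, -2.0, 8.0])  # initial local values
eps = 0.25                           # step size; must stay below 1/max_degree

for _ in range(200):
    x_new = x.copy()
    for i, nbrs in neighbors.items():
        # move toward neighbors' current values
        x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
    x = x_new

print(x)          # every entry ends up close to 3.0 ...
print(x.mean())   # ... the average of the initial values
```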

Then, as a sibling comment points out, there is the influx of machine learning into the field. A large part of this so far has been multi-agent reinforcement learning (MARL). I see it mostly applied to optimization problems that are "too hard" or "too slow" for classic methods, and in some cases it seems to give impressive results.
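As a rough illustration of the simplest end of MARL, here is a sketch of two independent Q-learners on a toy coordination game; the payoff and hyperparameters are invented for the example, and serious MARL work treats the resulting non-stationarity far more carefully.

```python
import random

# Independent Q-learning on a 2-agent, 2-action coordination game.
# Each agent treats the other as part of the environment -- the source
# of the non-stationarity that makes MARL theory hard.

def payoff(a0, a1):
    # shared reward: 1 if the agents pick the same action, 0 otherwise
    return 1.0 if a0 == a1 else 0.0

ACTIONS = [0, 1]
alpha, epsilon = 0.1, 0.1                             # learning rate, exploration rate
q = [{a: 0.0 for a in ACTIONS} for _ in range(2)]     # one Q-table per agent

def act(agent):
    if random.random() < epsilon:                     # explore
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[agent][a])    # exploit

random.seed(0)
for _ in range(5000):
    a0, a1 = act(0), act(1)
    r = payoff(a0, a1)
    # stateless game, so the update has no bootstrapped next-state term
    q[0][a0] += alpha * (r - q[0][a0])
    q[1][a1] += alpha * (r - q[1][a1])

print(q)  # both agents end up preferring the same action
```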

Techniques from both areas are frequently mixed and matched for specific applications: think agents running a classic optimization routine but feeding it ML-based classifications and a local knowledge base. What I actually see being used in the wild at the moment are relatively limited agents, applied to a single optimization task and under frequent human supervision.
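For what it's worth, the hybrid setups I mean tend to look structurally like the sketch below. All names, the dummy "classifier", and the trivial greedy optimizer are placeholders of my own, not any particular system's API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Illustrative shape of a "hybrid" agent: an ML model supplies local
# estimates, a classic optimizer picks the action, a human can veto it.

@dataclass
class HybridAgent:
    classify: Callable[[dict], float]                           # ML part: observation -> cost estimate
    knowledge: Dict[str, float] = field(default_factory=dict)   # local knowledge base

    def decide(self, observations: Dict[str, dict]) -> str:
        # 1. ML step: refresh local cost estimates per candidate task
        for task, obs in observations.items():
            self.knowledge[task] = self.classify(obs)
        # 2. Classic step: trivial "optimizer" -- greedy argmin over estimates
        return min(self.knowledge, key=self.knowledge.get)

    def act(self, observations, human_approves=lambda task: True):
        # 3. Frequent human supervision: do nothing if the human vetoes
        task = self.decide(observations)
        return task if human_approves(task) else None

# Usage with a dummy "classifier" and two candidate tasks
agent = HybridAgent(classify=lambda obs: obs["load"] * 2.0)
print(agent.act({"task_a": {"load": 0.7}, "task_b": {"load": 0.3}}))  # -> "task_b"
```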

More recently, LLMs have certainly taken over the MAS term and the corresponding SEO. What this means for the future of the field, I have no idea, but it will surely influence where research funding is allocated. Personally, I find it hard to believe LLMs will solve the classic engineering problems (speed, reliability, correctness) that seem to hold back MAS in more "real world" environments. I assume the focus will instead shift toward applications with a higher tolerance for weird outputs. But maybe I just lack imagination.