Not surprising, since concepts are virtual. There is a person; a person with a partner is a couple; a couple with a kid is a family. That's 5 concepts alone.

I’m not sure you grok how big a number 10^43741 is.

If we assume that a "concept" is something that can be uniquely encoded as a finite string of English text, you could go up to concepts so complex that every single one would take all the matter in the universe to encode (or even 10^80 universes' worth, each with 10^80 particles), and out of 10^43741 concepts you'd still have 10^43741 left undefined.

A concept space of 10^43741 needs about 43741 * log2(10), or roughly 145,000 bits, to identify each concept uniquely (by the information-theoretic notion of a bit, which is more a lower bound on what we traditionally call bits in the computer world than an exact match), or about 18,000-ish "bytes", which you can reasonably approximate as a compressed text size. There's a couple orders of magnitude of fiddling around the edges you can do there, but you still end up with human-sized quantities of information to identify a specific concept in a space that size, rather than massively-larger-than-the-universe-sized quantities.
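A quick sketch of that arithmetic in Python (using Python's arbitrary-precision integers, so the 43741-digit number is handled exactly):

```python
import math

# The concept space from the discussion: 10^43741 distinct concepts.
space = 10 ** 43741

# Bits needed to give each concept a unique index: essentially log2(space).
# For a Python big int, bit_length() is floor(log2(n)) + 1.
bits_exact = space.bit_length()            # ~145,000 bits
bits_approx = 43741 * math.log2(10)        # same figure via digits * log2(10)

print(bits_exact, round(bits_approx))      # both ~145,000
print(bits_exact / 8)                      # ~18,000 "bytes" to name one concept
```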

Things like novels come from that space. We sample it all the time. Extremely, extremely sparsely, of course.

Or to put it another way, identifying a specific element in a space of a given size takes the log2 of the space's size in bits, not something the size of the space itself. 10^43741 is a very large space by our standards, but the log2 of it is not impossibly large.

If it seems weird for models to work in this space, remember that the models themselves, in their full glory, clock in at multiple hundreds of gigabytes, so the space of possible AIs using this neural architecture is itself 2^trillion-ish, which makes 10^43741 look pedestrian. Understanding how to do anything useful with that amount of possibility is quite the challenge.
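A back-of-the-envelope comparison, assuming (my numbers, purely for illustration) ~200 GB of weights and counting every distinct bit pattern as a different possible model:

```python
import math

# Assumption: ~200 GB of model weights, each distinct bit pattern
# counted as a different possible model.
model_bits = 200 * 8 * 10**9                     # ~1.6e12 bits of parameters

# The space of possible models is 2**model_bits; its size in decimal
# digits is model_bits * log10(2).
model_space_digits = model_bits * math.log10(2)  # ~4.8e11 digits

concept_space_digits = 43741                     # the 10^43741 space above

print(f"digits in the number of possible models: {model_space_digits:.3g}")
print(f"digits in the concept space size:        {concept_space_digits}")
```

With those assumptions the model space has hundreds of billions of decimal digits in its size, next to 43,741 for the concept space.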