> Category theory is what you get when you take mappings instead of sets as the primitive objects of your universe.
I'm not sure about that, because you still need some concept of set (or collection, or class) to define a category: you need a collection of objects and of the mappings between them. (Technically that only defines a "small" category, but defining any larger category would require at least as much set-theoretic machinery.)
More exactly, whereas in set theory it's the membership relation between sets and their elements that is basic, in category theory it's the mapping between objects.
Nevertheless, the basic concepts of set theory can also be defined within category theory, so in that sense they're inter-translatable. In each case though, you need some ambient idea of a collection (or class or set) of the basic objects. Tom Leinster has a brilliantly clear and succinct (8 pages) exposition of how this is done here https://arxiv.org/abs/1212.6543
The thing is, even defining first-order logic requires a (potentially infinite) collection of variables and constant terms; and set theory is embedded in first-order logic, so both set theory and category theory are on the same footing in seemingly requiring a prior conception of some kind of potentially infinite "collection". To be honest, I'm a bit puzzled as to how that works logically.
Defining first-order logic doesn't really require set theory, but it does require some conception of natural numbers. Instead of saying there's an infinite set of variables, you can have a single symbol x and a mark *, and then you can say a variable is any string consisting of x followed by any number of marks. You can do the same thing with constants, relation symbols and function symbols. This does mean there can only be countably many of each type of symbol but for foundational purposes that's fine since each proof will only mention finitely many variables.
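The scheme described above is easy to make concrete. Here is a minimal sketch (the function names `variable` and `constant` are mine, chosen for illustration): each symbol is just the base letter followed by some number of marks, so no prior "set of variables" is assumed, only the repeatable marking operation.

```python
# Sketch: a countably infinite supply of distinct symbols built from a
# single base letter and a mark "*", as in the x, x*, x**, ... scheme.
# Each symbol is produced on demand by repeating the marking operation;
# no infinite set is presupposed.

def variable(n: int) -> str:
    """Return the n-th variable symbol: x, x*, x**, ..."""
    return "x" + "*" * n

def constant(n: int) -> str:
    """The same scheme with a different base letter gives constants."""
    return "c" + "*" * n

print(variable(0))  # x
print(variable(3))  # x***
print(constant(2))  # c**
```

Any two symbols produced this way are distinct strings, and since any given proof mentions only finitely many symbols, this finite, mechanical procedure is all the "infinity" that the base logic actually uses.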
Allowing uncountably many symbols can be more convenient when you apply logic in other ways, e.g. when doing model theory, but from a foundational perspective when you're doing stuff like that you're not using the "base" logic but rather using the formalized version of logic that you can define within the set theory that you defined using the base logic.
Thanks, that sharpens it into a question about natural numbers, or at least about some idea of an indefinitely extensible collection of unique elements: it seems the ordering on the numbers isn't required for a collection of variables; we just need them to be distinct.
I don't think you need set theory to define set theory (someone would have noticed that kind of circularity), but there still seems to be some sleight of hand in the usual presentations: authors often say, in the definition of first-order logic prior to defining set theory, that "there is an infinite set of variables". Then they define a membership relation and an empty set, and then "construct the natural numbers"... But I guess that's just sloppiness, or a different concern in the particular presentation, and the seeming circularity is eliminable.
Maybe instead of saying at the outset that we require natural numbers, wouldn't it be enough to give an operation or algorithm which can be repeated indefinitely many times to give new symbols? This is effectively what you're illustrating with the x, x*, x**, etc.
If that's all we need then it seems perfectly clear, but this kind of operational or algorithmic aspect of the foundations of logic and mathematics isn't usually acknowledged, or at least the usual presentations don't put it in this way, so I'm wondering if there's some inadequacy or incompleteness about it.