Category theory is popular in computer science because, at a fundamental level, the two are very compatible ways of seeing the world.
In computing, we think about:
- a set of states
- with transformations between them
- including a ‘do nothing’ transformation
- that can be composed associatively (a sequence of statements `{a; b;}; c` transforms the state in the same way as a sequence of statements `a; {b; c;}`; see the sketch after this list)
- but only in certain ways: some states are unreachable from other states
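Here's a minimal sketch of that picture, assuming a toy machine whose entire state is a single `Int` register (the names `State`, `incr`, and `double` are made up for illustration): the ‘do nothing’ transformation is `id`, and `(.)` composes transformations associatively.

```haskell
-- Toy model: the machine's whole state is a single Int register.
type State = Int

-- Two transformations between states.
incr, double :: State -> State
incr   = (+ 1)
double = (* 2)

-- With a = incr, b = incr, c = double, these two groupings are the
-- same transformation, because (.) is associative.
pipelineA, pipelineB :: State -> State
pipelineA = double . (incr . incr)   -- {a; b;}; c, read right to left
pipelineB = (double . incr) . incr   -- a; {b; c;}, read right to left

main :: IO ()
main = print (pipelineA 0 == pipelineB 0, pipelineA 0)  -- prints (True,4)
```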
This is exactly the sort of thing category theory studies, so there's a lot of cross-pollination between the disciplines. Computing supplies interesting properties of certain categories, such as ‘computability’ or ‘polynomial-time efficiency’, that can help category theorists track down interesting beasts to study, both in category theory itself and in other branches of mathematics that have their own relationships to it. Meanwhile, category theory can suggest to computer science both what sorts of things the states and transformations can mean and what the consequences are of defining them in different ways, i.e. how we can capture more expressive power or efficiency without straying too far from the comfort of our ‘do this, then do that’ mental model.
This latter point is really helpful in computer science, especially in programming language or API design, because in general it's a really hard problem to say, given a particular set of basic building blocks, what properties they'll have when combined in all possible ways. Results in category theory usually look like that: given a set of building blocks of a particular form, you will always be able to compose them in such a way that the result has a desired property; or, no matter how they're combined, the result will never have a particular undesired property.
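To make the shape of such results concrete, here's one standard instance in Haskell (my choice of example, not one from the comment above): lawful Functors are closed under composition, so composing any two of them always yields another lawful Functor, and `Data.Functor.Compose` from base packages exactly that guarantee.

```haskell
import Data.Functor.Compose (Compose (..))

-- Maybe and [] are both lawful Functors, so their composite is too;
-- we never have to re-verify the functor laws for the combination.
nested :: Compose [] Maybe Int
nested = Compose [Just 1, Nothing, Just 3]

-- fmap acts through both layers at once.
bumped :: Compose [] Maybe Int
bumped = fmap (+ 1) nested

main :: IO ()
main = print (getCompose bumped)  -- prints [Just 2,Nothing,Just 4]
```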
As an aside, it's common in a certain computer science subculture (mostly the one that likes category theory) to talk about computing in the language of typed functional programming, but if you don't already have a deep understanding of how functional programming represents computation, this can make it hard to see the forest for the trees: when a functional programmer says ‘type’ or ‘typing context’, you can think about sets of potential (sub)states of the computer.
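To make that reading concrete, here's a toy sketch (the `DoorState` type is invented for illustration): the type enumerates the possible states of one small piece of the machine, and a function between types is a transformation between those sets of states.

```haskell
-- DoorState has exactly two inhabitants, so it describes a piece of
-- the computer that can be in exactly two states.
data DoorState = Open | Closed
  deriving (Show, Eq)

-- A function between types is a transformation between state sets.
toggle :: DoorState -> DoorState
toggle Open   = Closed
toggle Closed = Open

main :: IO ()
main = print (toggle Open, toggle (toggle Open))  -- prints (Closed,Open)
```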
Still, what is, in your opinion, the advantage of thinking in category theory rather than set theory? (For programming, that is, not algebraic geometry.)
I mean, all the examples I've heard can be treated directly with groups, monoids, and ordinary functions.
I know some abstract concepts can be defined in a nice way with CT but not nearly as easily with set theory, e.g. the (abstract) tensor product. Yet for other concepts, including quantum mechanics, I have found that CT carries "abstract overhead" with little added value.
In my opinion, the important advantage of category theory over set theory in (some!) computational contexts is that it lets you generalize more easily. Generalizing from sets and functions to objects and morphisms lets you instantiate those objects and morphisms with a variety of different beasts while keeping the structure you've built on top of them, and you can even modularly build towers of functionality by layering one abstraction on another, even if you later choose to instantiate one of those layers with good old sets and functions. By contrast, it's hard to imagine treating something like async functions with plain old set theory: there is of course a way to do it, but you'd have to reason about several different layers of abstraction together to get all the way down to sets in one step.
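As a small illustration of that ‘same structure, different beasts’ point (my own sketch, not the commenter's; Kleisli composition via `(>=>)` is standard in Haskell's `Control.Monad`): effectful functions `a -> m b` compose with `(>=>)` exactly the way plain functions compose with `(.)`, and swapping the monad `m` swaps the effect while the compositional structure stays fixed.

```haskell
import Control.Monad ((>=>))

-- Effectful functions a -> m b, here with Maybe playing the role of
-- the effect (possible failure). The categorical structure (identity,
-- associative composition) is the same as for plain functions; only
-- the notion of 'morphism' has changed.
safeRecip :: Double -> Maybe Double
safeRecip 0 = Nothing
safeRecip x = Just (1 / x)

safeSqrt :: Double -> Maybe Double
safeSqrt x
  | x < 0     = Nothing
  | otherwise = Just (sqrt x)

-- Composed exactly like ordinary functions; failure threads for free.
recipThenSqrt :: Double -> Maybe Double
recipThenSqrt = safeRecip >=> safeSqrt

main :: IO ()
main = do
  print (recipThenSqrt 4)    -- Just 0.5
  print (recipThenSqrt 0)    -- Nothing
  print (recipThenSqrt (-4)) -- Nothing
```

Replace `Maybe` with `IO`, or with an async/promise-style monad, and both the composition operator and the reasoning about it carry over unchanged; that portability is exactly what a direct set-theoretic treatment would make you rebuild by hand at each layer.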