Category theory is an abstraction. It is the next level of abstraction in the chain of abstractions that starts with the ones we first encounter in undergraduate mathematics when we learn about vector spaces and groups, or maybe metric and topological spaces.
At the high school level, we are taught that vectors are "quantities with magnitude and direction", and shown a list of highly concrete examples such as displacement, velocity, acceleration, and force. Going up very slightly on the ladder of abstraction, we come to understand that vectors are things with numerical "components", which can be scaled and added together component-wise. So, for example, rows of matrices can be thought of as vectors, and so can columns. This is still very concrete and constructive (since we are defining vectors by showing their construction in terms of components). Now of course, we can prove several properties that these vectors have, like distributivity of scalar multiplication over vector addition, and so on.
All this changes when we start learning linear algebra (proper). Now, vectors are elements of a vector space. A vector space itself is just a collection of elements satisfying a certain list of properties (the same properties that we had proved earlier using the "constructive" definition), usually called the "axioms of vector spaces". And the strange thing is, nobody cares what these elements, these vectors, really are, provided they satisfy the axioms! For example, the solutions of a linear homogeneous differential equation can form a vector space — something we might not have considered before. That, exactly, is abstraction.
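The claim about differential equations can be checked concretely. The following is a minimal numerical sanity check, not a proof: `sin` and `cos` both solve the linear homogeneous ODE y'' + y = 0, and any linear combination of them solves it too, which is exactly the closure under addition and scaling that the vector space axioms demand. (The helper names `second_derivative` and `combo` are illustrative, not standard.)

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central finite-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

def combo(a, b):
    """The linear combination a*sin + b*cos of two known solutions."""
    return lambda x: a * math.sin(x) + b * math.cos(x)

# Any combination of the two basic solutions is again a solution:
# f'' + f stays (numerically) zero, so the solution set is closed
# under addition and scalar multiplication.
f = combo(2.0, -3.0)
for x in (0.0, 0.7, 1.9):
    assert abs(second_derivative(f, x) + f(x)) < 1e-5
```

Nothing in the check depends on what a "solution" looks like internally; only the axioms matter, which is the point of the abstraction.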
The first and biggest benefit of this abstraction is minimality. We now have a very clean space to work in. All unnecessary clutter is absent, and only the barest details are available for use in proving whatever we wish to prove. Such minimality is the main advantage of abstract mathematics, and that should not be surprising at all — when we have very little to work with, there are only a few options to consider while trying to decide on the line of proof.
Another benefit is generality. Now, anything we prove in our abstract vector space applies to vectors of any concrete vector space, including the familiar old vectors from high school physics and basic matrix theory, as well as the plethora of strange new vectors such as "solutions of differential equations", and "sets of periodic functions". This is obvious and needs no further explanation.
In the same way, group theory is the abstraction of the study of permutations. Metric spaces are the abstraction of the study of distances. Topology is the abstraction of the study of continuous functions. Lattice theory is the abstraction of the study of order. Matroid theory is the abstraction of the study of linear independence (yes, a further abstraction of vector spaces!).
But in all of these different fields — set theory, linear algebra, group theory, ring theory, field theory, topology, lattice theory, matroid theory — the objects of study are some sort of structured sets. And we don't get far by studying each such structured set in isolation — very soon, we are forced to look at how they interact with each other, via structure-preserving maps (homomorphisms) between them. And when we go deep enough into any of these fields, we break out of the boundary and reach into one or more of the other fields. We study the homotopy groups of topological spaces, topologies of algebraic varieties, endomorphism rings of groups, and so on.
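A one-line illustration of what "structure-preserving" means, using a familiar example (not drawn from the essay itself): the exponential function is a homomorphism from the reals under addition to the positive reals under multiplication, because it carries the structure of one operation onto the other.

```python
import math

# exp : (R, +) -> (R_{>0}, *) preserves the structure:
# the sum on one side becomes the product on the other.
a, b = 1.5, 2.25
assert math.isclose(math.exp(a + b), math.exp(a) * math.exp(b))
```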
Evidently, then, the next step of abstraction is to axiomatize such categories of objects and maps. And very much as we did in the case of abstract vector spaces, where we mentioned nothing about what vectors look like on the inside, we do not want to say anything about the internal structure of the objects in a category. Instead, we only list the rules that the maps between the objects of a category have to satisfy. And not surprisingly, a category may be something as simple as a collection of structured sets and their homomorphisms, or something as bizarre as a collection of interlinked morphisms of an abstract category and "higher order" morphisms between them.
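Those rules are remarkably few. A sketch, in ordinary Python (the function names `identity` and `compose` are mine, not standard notation): every object carries an identity morphism, morphisms compose, composition is associative, and the identities are neutral for composition. Here the "objects" are implicitly Python types and the morphisms are plain functions, giving one concrete category.

```python
def identity(x):
    """The identity morphism on every object: it does nothing."""
    return x

def compose(g, f):
    """Composition of morphisms: (g after f)(x) = g(f(x))."""
    return lambda x: g(f(x))

double = lambda x: 2 * x
inc = lambda x: x + 1

# Identity laws: composing with the identity changes nothing.
assert compose(double, identity)(5) == double(5) == compose(identity, double)(5)

# Associativity: it does not matter how we bracket a chain of morphisms.
lhs = compose(double, compose(inc, identity))
rhs = compose(compose(double, inc), identity)
assert lhs(3) == rhs(3) == 8
```

Note that the laws mention nothing about what the objects contain; they constrain only how the morphisms fit together.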
In category theory, we study objects not by opening them up and peering inside them, but instead by looking at how they behave with all the other objects in the space that they live in. We have climbed up quite a bit on the ladder of abstraction.