## Wednesday, March 15, 2017

Category theory is an abstraction. It is the next level of abstraction in the chain of abstractions that starts with the ones we first encounter in undergraduate mathematics when we learn about vector spaces and groups, or maybe metric and topological spaces.

At the high school level, we are taught that vectors are "quantities with magnitude and direction", and shown a list of highly concrete examples such as displacement, velocity, acceleration, and force. Going up very slightly on the ladder of abstraction, we come to understand that vectors are things with numerical "components", which can be scaled and added together component-wise. So, for example, rows of matrices can be thought of as vectors, and so can columns. This is still very concrete and constructive (since we are defining vectors by showing their construction in terms of components). Now of course, we can prove several properties that these vectors have, like distributivity of scalar multiplication over vector addition, and so on.

All this changes when we start learning linear algebra (proper). Now, vectors are elements of a vector space. A vector space itself is just a collection of elements satisfying a certain list of properties (the same properties that we had proved earlier using the "constructive" definition), usually called the "axioms of vector spaces". And the strange thing is, nobody cares what these elements, these vectors, really are, provided they satisfy the axioms! For example, the solutions of a linear homogeneous differential equation can form a vector space — something we might not have considered before. That, exactly, is abstraction.

The first and biggest benefit is minimality. We now have a very clean space to work in. All unnecessary clutter is absent, and only the barest details are available for use in proving whatever we wish to prove. Such minimality is the main advantage of abstract mathematics, and that shouldn't be counterintuitive at all — when we have very little to work with, there are only a few options to consider while trying to decide on the line of proof.

Another benefit is generality. Now, anything we prove in our abstract vector space applies to vectors of any concrete vector space, including the familiar old vectors from high school physics and basic matrix theory, as well as the plethora of strange new vectors such as "solutions of differential equations", and "sets of periodic functions". This is obvious and needs no further explanation.

In the same way, group theory is the abstraction of the study of permutations. Metric spaces are the abstraction of the study of distances. Topology is the abstraction of the study of continuous functions. Lattice theory is the abstraction of the study of order. Matroid theory is the abstraction of the study of linear independence (yes, a further abstraction of vector spaces!).

But in all of these different fields — set theory, linear algebra, group theory, ring theory, field theory, topology, lattice theory, matroid theory — the objects of study are some sort of structured sets. And we don't get far by studying each such structured set in isolation — very soon, we are forced to look at how they interact with each other, via structure-preserving maps (homomorphisms) between them. And when we go deep enough into any of these fields, we break out of the boundary and reach into one or more of the other fields. We study the homotopy groups of topological spaces, topologies of algebraic varieties, endomorphism rings of groups, and so on.

Evidently, then, the next step of abstraction is to axiomatize such categories of objects and maps. And very much as we did in the case of abstract vector spaces, where we said nothing about what vectors look like on the inside, we do not want to say anything about the internal structure of the objects in a category. Instead, we only list the rules that the maps between objects have to satisfy. And not surprisingly, a category may be something as simple as a collection of structured sets and their homomorphisms, or something as bizarre as a collection of interlinked morphisms of an abstract category and "higher order" morphisms between them.

In category theory, we study objects not by opening them up and peering inside them, but instead by looking at how they behave with all the other objects in the space that they live in. We have climbed up quite a bit on the ladder of abstraction.

## Sunday, January 8, 2017

### Chapter 1, Section 2: Aluffi Chapter 0 problems and solutions

2.2: Prove statement (2) in proposition 2.1.

You may assume that given a family of disjoint subsets of a set, there is a way to choose one element in each member of the family.

Proposition 2.1:

Assume $A \neq \emptyset$, and let $f : A \rightarrow B$ be a function. Then,
(2) $f$ has a right inverse iff it is surjective.

Suppose that $f$ has a right inverse $g$. We wish to show that $f$ is surjective. Let $b \in B$ and consider $f(g(b)) = id_B(b) = b$. Thus $b$ has a preimage, namely $g(b)$, so $f$ is surjective.

Conversely, suppose that $f$ is surjective; we wish to show that $f$ has a right inverse. We shall define $g(b)$ as follows. Consider the set $A_b = \{a \in A : f(a) = b\}$. This set is non-empty, as $f$ is surjective.

Since the sets $A_b$ are disjoint (each $a \in A$ satisfies $f(a) = b$ for exactly one $b$), the assumption above lets us select an element $a_b \in A_b$ for every $b \in B$ simultaneously. Define $g(b) = a_b$. Then the resulting function $g : B \rightarrow A$ satisfies $f(g(b)) = f(a_b) = b$ for all $b \in B$, so it is a right inverse of $f$ by construction. $\square$
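This construction can be sketched in a few lines of Python (an illustration only; the particular function and sets are made up for the example), representing a function on a finite set as a dict:

```python
def right_inverse(f, codomain):
    """Given a surjective f : A -> B (as a dict), build a right inverse
    g : B -> A by choosing one element a_b in each A_b = {a : f(a) = b}."""
    g = {}
    for b in codomain:
        # A_b is non-empty because f is surjective; pick one element of it.
        g[b] = next(a for a in f if f[a] == b)
    return g

# f : {1,2,3,4} -> {'x','y'} is surjective
f = {1: 'x', 2: 'y', 3: 'x', 4: 'y'}
g = right_inverse(f, {'x', 'y'})
assert all(f[g[b]] == b for b in {'x', 'y'})   # f o g = id_B
```

For infinite sets the "pick one element" step is exactly where the choice assumption quoted above is needed; for finite sets, as here, no such assumption is required.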

2.3: Prove that the inverse of a bijection is a bijection, and that the composition of two bijections is a bijection.

Suppose that $f : A \rightarrow B$ is a bijection and $f^{-1} : B \rightarrow A$ is the inverse.
Recall $f^{-1} : B \rightarrow A$ is defined as $f^{-1}(b) = a \iff f(a) = b$. Suppose that $c = f^{-1}(b_1) = f^{-1}(b_2)$. This means that $f(c) = b_1$ and $f(c) = b_2$. Since f is a function, this means that $b_1 = b_2$. Therefore $f^{-1}$ is an injection.

Now we wish to show that $f^{-1}$ is a surjection. Let $a \in A$ be an arbitrary element, and consider $f(a) \in B$. Then $f^{-1}(f(a)) = a$, so $f^{-1}$ is surjective. $\square$

Suppose that $f : A \rightarrow B$ and $g: B \rightarrow D$ are two bijective functions.
Consider $h := g \circ f : A \rightarrow D$. Then $h$ is a bijection since it has the two-sided inverse $f^{-1} \circ g^{-1} : D \rightarrow A$. $\square$

2.5: Formulate a notion of epimorphism, in the style of the notion of monomorphism seen in 2.6, and prove a result analogous to proposition 2.3, for epimorphisms and surjections.

Formulation: (Here the category we are considering is that of Sets).

A function $f : A \rightarrow B$ is an epimorphism iff for all sets $Z$ and all functions $\alpha_1,\alpha_2 : B \rightarrow Z$, we have $\alpha_1 \circ f = \alpha_2 \circ f \implies \alpha_1 = \alpha_2$.

Conjecture:
$f : A \rightarrow B$ is an epimorphism $\iff$ $f$ is surjective.

Suppose that $f : A \rightarrow B$ is a surjection, and suppose $Z$ is a set and $\alpha_1,\alpha_2 : B \rightarrow Z$ satisfy $\alpha_1 \circ f = \alpha_2 \circ f$. Since $f$ is surjective, it has a right inverse $g$ (by 2.2). Then $\alpha_1 = \alpha_1 \circ id_B = \alpha_1 \circ (f \circ g) = (\alpha_1 \circ f) \circ g = (\alpha_2 \circ f) \circ g = \alpha_2 \circ (f \circ g) = \alpha_2 \circ id_B = \alpha_2$.

Conversely, suppose that $f : A \rightarrow B$ is an epimorphism, and suppose toward a contradiction that $f$ isn't surjective. That is, there exists $b \in B$ such that no $a \in A$ gets mapped to $b$. Take $Z = \{0,1\}$, let $\alpha_1 : B \rightarrow Z$ be the constant function $0$, and construct $\alpha_2 : B \rightarrow Z$ to be the same as $\alpha_1$ except at the point $b$, so that $\alpha_1(b) \neq \alpha_2(b)$. Then $\alpha_1 \circ f = \alpha_2 \circ f$, since the two functions only disagree at $b$, which is not in the image of $f$; however $\alpha_1 \neq \alpha_2$. Contradiction. $\square$
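For finite sets we can even verify the equivalence by brute force. The following Python sketch (the sets and functions are my own choices for illustration) checks the epimorphism condition against every pair of test functions $\alpha_1, \alpha_2 : B \rightarrow Z$:

```python
from itertools import product

def functions(dom, cod):
    """All functions dom -> cod, each represented as a dict."""
    dom = list(dom)
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

def is_epi(f, B, Z):
    """Check: alpha1 o f = alpha2 o f forces alpha1 = alpha2."""
    A = list(f)
    return all(a1 == a2
               for a1 in functions(B, Z) for a2 in functions(B, Z)
               if all(a1[f[x]] == a2[f[x]] for x in A))

B, Z = ['x', 'y'], [0, 1]
surj = {1: 'x', 2: 'y'}       # surjective onto B
not_surj = {1: 'x', 2: 'x'}   # misses 'y'
assert is_epi(surj, B, Z)
assert not is_epi(not_surj, B, Z)
```

Note that $Z$ must have at least two elements for the second assertion to detect the failure, mirroring the choice of $Z = \{0,1\}$ in the proof above.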

2.4: Prove that "isomorphism" is an equivalence relation on any sets.

$A \cong A$ since $id : A \rightarrow A$ given by $id(a) = a$ is trivially a bijection. Thus, the notion of bijection is reflexive.

Suppose that $A \cong B$ and $B \cong C$. Then there exist bijections $f : A \rightarrow B$ and $g : B \rightarrow C$, so $g \circ f : A \rightarrow C$ is a bijection from A to C by 2.3. Thus $A \cong C$, and the notion of bijection is transitive.

Suppose $A \cong B$; then there exists a bijection $f : A \rightarrow B$. Since $f$ is a bijection, it has a unique inverse $f^{-1} : B \rightarrow A$, and by 2.3 this inverse is a bijection from B to A. Thus $B \cong A$, so the notion of bijection is symmetric. $\square$

2.6: With the notation as in example 2.4, explain how any function $f : A \rightarrow B$ determines a section of $\pi_A$.

Recall, surjective functions induce right inverses, and we call those right inverses sections.

Recall: let A and B be sets. Then there are natural projections $\pi_A$ and $\pi_B$:

\begin{equation*}
\xymatrix{
& A \times B \ar[dl]_{\pi_A} \ar[dr]^{\pi_B} &  \\
A& & B
}
\end{equation*}

These are defined by $$\pi_A((a,b)) := a$$
$$\pi_B((a,b)) := b$$

Both maps are clearly surjective. Given any function $f : A \rightarrow B$, construct a section $g : A \rightarrow A \times B$ of $\pi_A$ as follows: $a \mapsto (a,f(a))$. This is a section since $(\pi_A \circ g)(a) = a \ \forall \ a \in A$. Thus any function $f : A \rightarrow B$ determines a section of $\pi_A$. $\square$
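The construction is short enough to spell out concretely. Here is a small Python sketch (the sets and function are made up for illustration):

```python
def section_of_pi_A(f, A):
    """Given f : A -> B (as a dict), the induced section g : A -> A x B,
    sending a |-> (a, f(a))."""
    return {a: (a, f[a]) for a in A}

pi_A = lambda pair: pair[0]   # the projection A x B -> A

f = {1: 'u', 2: 'v'}
g = section_of_pi_A(f, [1, 2])
assert all(pi_A(g[a]) == a for a in [1, 2])   # pi_A o g = id_A
```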

2.7: Let $f : A \rightarrow B$. Prove that the graph $\Gamma_f$ of f is isomorphic to A.

Recall, $$\Gamma_f := \{(a,b) \ : \ b = f(a)\}.$$

Consider $\theta : A \rightarrow \Gamma_f$ given by $a \mapsto (a,f(a))$; we claim that this gives an isomorphism.

Let $m = (c,f(c)) \in \Gamma_f$; then $\theta(c) = m$. Therefore, $\theta$ is surjective.

Suppose that $$(a_1,f(a_1)) = (a_2,f(a_2)).$$

Then $a_1 = a_2$ (and $f(a_1) = f(a_2)$), so $\theta$ is injective. $\square$

More questions and solutions to come for this section.

2.8: Describe as explicitly as you can all terms in the canonical decomposition of the function $\mathbb{R} \rightarrow \mathbb{C}$ defined by $r \mapsto e^{2 \pi i r}$.

Recall that given any $f : A \rightarrow B$ we have the following decomposition:
$$A \twoheadrightarrow A/\!\sim\ \rightarrow \mathrm{im}(f) \hookrightarrow B$$

Here the map $A/\!\sim\ \rightarrow \mathrm{im}(f)$ is an isomorphism. Recall $a_1 \sim a_2 \iff f(a_1) = f(a_2)$.
Therefore, for $\psi : \mathbb{R} \rightarrow \mathbb{C}$ given by $r \mapsto e^{2\pi i r}$, we have $r_1 \sim r_2 \iff e^{2\pi i r_1} = e^{2\pi i r_2} \iff r_2 = r_1 + k$ for some $k \in \mathbb{Z}$.

Therefore $\mathbb{R}/\!\sim\ \cong S^{1}$; that is, the quotient is a circle. This matches exercise 1.6.
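We can play with this decomposition numerically. The following Python sketch (an illustration only, using the fractional part of $r$ as a representative of its class) reads the exponent as $2\pi i r$:

```python
import cmath

# psi : R -> C, r |-> e^{2*pi*i*r}
psi = lambda r: cmath.exp(2j * cmath.pi * r)

# The class of r under r1 ~ r2 <=> r2 - r1 in Z is captured by the
# fractional part, so R/~ is parametrized by [0, 1): a circle.
cls = lambda r: r % 1.0

assert cls(0.25) == cls(3.25)              # 3.25 ~ 0.25
assert abs(psi(0.25) - psi(3.25)) < 1e-9   # psi agrees on the whole class
assert abs(abs(psi(0.7)) - 1.0) < 1e-12    # the image lies on the unit circle
```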

2.9: Show that if $A^{\prime} \cong A^{\prime \prime}$ and $B^{\prime} \cong B^{\prime \prime}$, and further $A^{\prime} \cap B^{\prime} = \emptyset$ and $A^{\prime \prime} \cap B^{\prime \prime} = \emptyset$, then $A^{\prime} \cup B^{\prime} \cong A^{\prime \prime} \cup B^{\prime \prime}$. Conclude that the operation $A \coprod B$ is well defined up to isomorphism.

Since $A^{\prime} \cong A^{\prime \prime}$, there exists a bijection $f : A^{\prime} \rightarrow A^{\prime \prime}$. Similarly, since $B^{\prime} \cong B^{\prime \prime}$, there exists a bijection $g : B^{\prime} \rightarrow B^{\prime \prime}$. Construct $\phi : A^{\prime} \cup B^{\prime} \rightarrow A^{\prime \prime} \cup B^{\prime \prime}$ as follows:

$c \mapsto f(c)$ if $c \in A^{\prime}$, and $c \mapsto g(c)$ if $c \in B^{\prime}$. Since $A^{\prime} \cap B^{\prime} = \emptyset$, $c$ can't be simultaneously in both $A^{\prime}$ and $B^{\prime}$, so $\phi$ is well defined. We have to prove it is a bijection.

Suppose $\phi(c_1) = \phi(c_2)$. Since $A^{\prime \prime} \cap B^{\prime \prime} = \emptyset$, the common value lies in exactly one of $A^{\prime \prime}$ or $B^{\prime \prime}$, so $c_1$ and $c_2$ are both in $A^{\prime}$ or both in $B^{\prime}$. Hence $f(c_1) = f(c_2)$ or $g(c_1) = g(c_2)$, and since f and g are bijections, $c_1 = c_2$.

Suppose that $d \in A^{\prime \prime} \cup B^{\prime \prime}$; then $d \in A^{\prime \prime}$ or $d \in B^{\prime \prime}$, but not both. Suppose $d \in A^{\prime \prime}$. Since f is a bijection, it has a unique inverse $f^{-1}$. Consider the element $f^{-1}(d) \in A^{\prime}$: then $\phi(f^{-1}(d)) = f(f^{-1}(d)) = d$. The case $d \in B^{\prime \prime}$ is analogous, using $g^{-1}$. Thus $\phi$ is surjective. $\square$

2.10: Show that if A and B are finite sets, then $|B^{A}| = |B|^{|A|}$

Suppose $f : A \rightarrow B$. For each $a \in A$ there are $|B|$ choices for the value of the function at that point, and these choices are independent of each other. Hence there are a total of $|B|^{|A|}$ possible functions from A to B. $\square$
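This counting argument is easy to confirm by brute-force enumeration; the following Python sketch (sets chosen arbitrarily for illustration) lists $B^A$ explicitly:

```python
from itertools import product

def all_functions(A, B):
    """Enumerate B^A: each function A -> B, recorded as its tuple of values."""
    return list(product(B, repeat=len(A)))

A, B = [1, 2, 3], ['x', 'y']
fns = all_functions(A, B)
assert len(fns) == len(B) ** len(A)   # |B^A| = |B|^|A| = 2^3 = 8
```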

2.11: In view of exercise 2.10, it isn't unreasonable to use $2^{A}$ to denote the set of functions from an arbitrary set A to a set with 2 elements, say $\{0,1\}$. Prove that there is a bijection between $2^{A}$ and the power set of A.

Consider the map $\psi : P(A) \rightarrow 2^{A}$ defined by $B \mapsto I_B$, where $I_B : A \rightarrow \{0,1\}$ is given by $I_B(a) = 1$ if $a \in B$ and $I_B(a) = 0$ if $a \notin B$. We claim that $\psi$ is a bijection.

Suppose $f \in 2^{A}$. Then construct $B \in P(A)$ as follows:

$x \in B \iff f(x) = 1$. Then $\psi(B) = f$, so $\psi$ is surjective. For injectivity, note that if $B_1 \neq B_2$, then some $a \in A$ lies in one set but not the other, so $I_{B_1}(a) \neq I_{B_2}(a)$ and hence $\psi(B_1) \neq \psi(B_2)$. $\square$
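The bijection can be checked exhaustively for a small set. In the Python sketch below (an illustration; each indicator function is recorded as its tuple of values), $\psi$ is injective and hits all $2^{|A|}$ functions:

```python
from itertools import combinations

A = [1, 2, 3]

def power_set(A):
    """All subsets of A."""
    return [set(c) for r in range(len(A) + 1) for c in combinations(A, r)]

def indicator(B, A):
    """psi(B) = I_B, recorded as the tuple (I_B(a) for a in A)."""
    return tuple(1 if a in B else 0 for a in A)

images = [indicator(B, A) for B in power_set(A)]
assert len(set(images)) == len(power_set(A))   # psi is injective
assert len(images) == 2 ** len(A)              # and |P(A)| = |2^A| = 2^|A|
```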

## Saturday, January 7, 2017

### Chapter 1, Section 1: Aluffi Chapter 0 problems and solutions

1.2)Prove that if $\sim$ is an equivalence relation on S, then the corresponding family $P_{\sim}$ defined in 1.5 is indeed a partition of S: that is, its elements are non-empty and disjoint, and their union is S.

Proof:

Recall $[a] := \{b \in S : b \sim a\}$ and $P_{\sim} := \{ [a] : a \in S\}$.
$\bigcup_{[a] \in P_{\sim}} [a] = \bigcup_{a \in S} [a] = S$, since for each $a \in S$ we have at least $a \sim a$ by reflexivity, so each element of S lies in some equivalence class, namely its own. Also, $[a] \subset S$ for all $a \in S$, so the union is contained in S.

Since $a \sim a$ for all $a \in S$, we have $[a] \neq \emptyset$ for any $[a] \in P_{\sim}$, as at least $a \in [a]$.

Suppose that $[a], [b] \in P_{\sim}$ are such that there exists $q \in [a] \cap [b]$.

This means $q \sim a$ and $q \sim b$. By symmetry $a \sim q$, and then by transitivity $a \sim b$, which gives $[a] \subset [b]$. The other inclusion is similar. Thus, $[a] = [b]$. $\square$
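The statement is easy to test computationally. The following Python sketch (the relation and set are chosen for illustration) builds $P_{\sim}$ and checks that the classes are non-empty, cover S, and are pairwise disjoint:

```python
def classes(S, related):
    """P_~ = { [a] : a in S }, where [a] = { b in S : b ~ a }."""
    P = []
    for a in S:
        c = frozenset(b for b in S if related(b, a))
        if c not in P:
            P.append(c)
    return P

S = range(10)
P = classes(S, lambda b, a: b % 3 == a % 3)    # congruence mod 3
assert set().union(*P) == set(S)               # the classes cover S
assert all(c1 == c2 or not (c1 & c2) for c1 in P for c2 in P)  # disjoint
assert all(P)                                  # and non-empty
```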

1.3)Given a partition $\mathcal{P}$ on a set S, show how to define a relation $\sim$ on S such that $\mathcal{P}$ is the corresponding partition.

Proof:

Suppose that $\mathcal{P}$ is a partition on the set S. That is $\mathcal{P}$ is a collection of disjoint subsets of S, whose union is all of S.

Define the following relation:
$s_1 \sim s_2 \iff s_1$ and $s_2$ belong to the same set in $\mathcal{P}$. We have to verify that this relation is an equivalence relation and, moreover, that it gives back $\mathcal{P}$ as the corresponding partition.

For all $s_1 \in S$ we have $s_1 \sim s_1$, since $s_1$ belongs to some $U_{s_1} \in \mathcal{P}$: the elements of $\mathcal{P}$ cover S, so there must exist at least one $U$ containing $s_1$, and we denote it by $U_{s_1}$. Next, suppose that $s_1 \sim s_2$; then $s_2 \sim s_1$ is immediate from the symmetry of the definition of this relation.

Suppose that $s_1 \sim s_2$ and $s_2 \sim s_3$. Then $s_1, s_2 \in U_1$ for some $U_1 \in \mathcal{P}$, and similarly $s_2, s_3 \in U_2$ for some $U_2 \in \mathcal{P}$. Since $s_2 \in U_1 \cap U_2$, the sets $U_1$ and $U_2$ are not disjoint, so $U_1 = U_2$, because distinct elements of a partition must be disjoint. Hence $s_1, s_3 \in U_1$, so $s_1 \sim s_3$.

We now have to show that $P_{\sim} = \mathcal{P}$. Suppose that $[a] \in P_{\sim}$. Then $b \in [a] \iff b$ and $a$ lie in some common set $U \in \mathcal{P}$. Moreover, this U is unique, as the elements of $\mathcal{P}$ are disjoint. This means $[a] = U$ for some $U \in \mathcal{P}$, so $[a] \in \mathcal{P}$. The other inclusion is trivial.

1.4) How many different equivalence relations may be defined on the set $\{1,2,3\}$?

This question is really asking how many different partitions of $\{1,2,3\}$ there are.
Therefore, we have to count the possible collections of disjoint subsets whose union is S. Here is one way to do it.
The subsets of S come in the following types:
Type 1) $\{\star \}$.
Type 2)$\{\star ,\star \}$.
Type 3)$\{\star ,\star ,\star \}$.

We must assemble subsets that are disjoint and whose union covers S. If we use a subset of type 3, there is only one possibility: $P_1 = \{ \{1,2,3\} \}$. If we use a subset of type 2, then the remaining element must form a subset of type 1 (another type 2 or a type 3 would overlap), and there are 3 ways to choose the type 2 subset, giving 3 possibilities. More concretely we have:

$$P_2 = \{ \{1\},\{2,3\} \},\quad P_3 = \{ \{2\},\{1,3\} \},\quad P_4 = \{ \{3\},\{1,2\} \}.$$

Finally, if we choose two singletons, then the third element must also form a singleton.
That is, $$P_5 = \{ \{1\},\{2\},\{3\} \}.$$

Thus, there are 5 different equivalence relations on the set $\{1,2,3\}$.
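This count can be confirmed by enumerating all partitions recursively. The following Python sketch (a standard recursion, not from Aluffi) places the first element either into an existing block of a partition of the remaining elements, or into a new block of its own:

```python
def partitions(S):
    """All partitions of the list S, each as a list of blocks."""
    if not S:
        return [[]]
    head, rest = S[0], S[1:]
    result = []
    for p in partitions(rest):
        # put `head` into each existing block in turn...
        for i in range(len(p)):
            result.append(p[:i] + [p[i] + [head]] + p[i+1:])
        # ...or give it a block of its own
        result.append(p + [[head]])
    return result

assert len(partitions([1, 2, 3])) == 5   # the Bell number B_3
```

The counts produced this way are the Bell numbers: 1, 2, 5, 15, ... for sets of size 1, 2, 3, 4.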

1.5)Give an example of a relation that is reflexive and symmetric, but not transitive. What happens if you attempt to use this relation to define a partition on the set?

Fix a particular set S. The hint provides the answer that we are looking for. Suppose that $\mathcal{P}$ is a collection of subsets of S whose union is all of S, such that at least two of the elements of $\mathcal{P}$ are not disjoint. That is, there exist $U_1,U_2 \in \mathcal{P}$ such that $U_1 \neq U_2$ and $U_1 \cap U_2 \neq \emptyset$.

Define a relation $\sim$ on S as follows:

$s_1 \sim s_2 \iff s_1$ and $s_2$ belong to the same set of $\mathcal{P}$; that is, there exists $U \in \mathcal{P}$ such that $s_1, s_2 \in U$.

Claim: $\sim$ is reflexive, symmetric, but not transitive.

It is obvious that $\sim$ is reflexive and symmetric. For transitivity to fail, take a concrete choice such as $S = \{1,2,3\}$ with $\mathcal{P} = \{U_1, U_2\}$, $U_1 = \{1,2\}$ and $U_2 = \{2,3\}$. In general, there exists $q_1 \in U_1 \cap U_2$, and whenever we can also find
$$\exists\ q_2 \in U_1 \ : q_2 \notin U_2,$$
$$\exists\ q_3 \in U_2 \ : q_3 \notin U_1,$$
with no element of $\mathcal{P}$ containing both $q_2$ and $q_3$ (as in the concrete choice above), then $q_2 \sim q_1$ and $q_1 \sim q_3$, but $q_2 \not\sim q_3$, so $\sim$ isn't transitive.

If we try to use $\sim$ to define a partition, we find that the would-be partition elements are non-empty and cover S, but they are not disjoint.

1.6: Define a relation $\sim$ on the set $\mathbb{R}$ of real numbers by setting $a \sim b \iff b - a \in \mathbb{Z}$. Prove that this is an equivalence relation, and find a compelling description for $\mathbb{R}/ \sim$. Do the same for the equivalence relation $\approx$ on the plane $\mathbb{R} \times \mathbb{R}$ defined by declaring $(a_1,a_2) \approx (b_1,b_2) \iff b_1 - a_1 \in \mathbb{Z} \ and \ b_2 - a_2 \in \mathbb{Z}$.

Proof:

First we shall prove that this is an equivalence relation. $a \sim a$, since $a - a = 0 \in \mathbb{Z}$, so $\sim$ is reflexive. Suppose $a \sim b$, i.e. $b - a \in \mathbb{Z}$. This means that $b - a = c$ for some $c \in \mathbb{Z}$, so $a - b = -(b - a) = -c \in \mathbb{Z}$. Therefore $b \sim a$, and $\sim$ is symmetric.

Finally, suppose that $a \sim b$ and $b \sim c$, so $b - a \in \mathbb{Z}$ and $c - b \in \mathbb{Z}$. Write $b - a = d$ and $c - b = m$ with $d,m \in \mathbb{Z}$. Then $c - a = (c - b) + (b - a) = m + d \in \mathbb{Z}$, so $a \sim c$ and $\sim$ is transitive.

Here, if we take the unit interval $[0,1]$, we can see that every real number is equivalent to some point of this interval, and 0 is identified with 1. Thus, $\mathbb{R}/ \sim$ represents a circle.

The relation $\approx$ follows the same logical pattern as $\sim$, coordinate by coordinate. The corresponding quotient represents a torus.

## Thursday, January 5, 2017

### Examples of categories with universal properties

In this post, we will discuss categories with a universal property. Recall that a category with a universal property is one which has a terminal object. We will also see a connection between mathematical constructions which might seem unrelated at first, but which, by the power of category theory, turn out to be closely related.

Recall from set theory the direct product of two sets. This will be a final object in a certain category. In general, we can define the product of two objects in a general category $\mathcal{C}$; however, it might or might not exist.

For the definition to make sense we have to introduce the category $\mathcal{C}_{A,B}$. Suppose that $A$ and $B$ are objects in a general category $\mathcal{C}$. We will construct $\mathcal{C}_{A,B}$ as follows:

1)$Obj(\mathcal{C}_{A,B})$ = the morphisms described by the diagram below:

\begin{equation*}
\xymatrix{
& A \\
Z \ar[ur]^{h} \ar[dr]_{g} \\
& B
}
\end{equation*}

One can think of the objects of this category as triples $(Z,h,g)$. That is, the objects of this category are really pairs of morphisms from an arbitrary object Z into the fixed objects A and B respectively.

2)Morphisms are commutative diagrams given by the following:
\begin{equation*}
\xymatrix{
& & A \\
Z_1 \ar@/^/[urr]^{h_1} \ar@/_/[drr]_{g_1} \ar[r]^{\sigma} & Z_2 \ar[ur]^{h_2}\ar[dr]_{g_2} \\
& & B
}
\end{equation*}

Now that we have defined the category $\mathcal{C}_{A,B}$, we can define the product of two objects $A$ and $B$ in a given category.

Definition 1:

Suppose that $\mathcal{C}$ is a category and A and B are arbitrary objects. Then the product, denoted by $A\times B$ (if it exists), is defined to be the final object of the category $\mathcal{C}_{A,B}$. That is, given any object $(Z_1,h_1,g_1) \in Obj(\mathcal{C}_{A,B})$, there exists a unique morphism $\theta : Z_1 \rightarrow A\times B$ such that the diagram below commutes:

\begin{equation*}
\xymatrix{
& & A \\
Z_1 \ar@/^/[urr]^{h_1} \ar@/_/[drr]_{g_1} \ar[r]^{\theta} & A\times B \ar[ur]^{\pi_A}\ar[dr]_{\pi_B} \\
& & B
}
\end{equation*}

In the category of sets, products do indeed exist. We shall prove this.
Theorem 1:

Suppose that $\mathcal{C} = Set$. The regular product $A\times B$ is a final object in the category $\mathcal{C}_{A,B}$.

Proof:

Consider the following diagram:
\begin{equation*}
\xymatrix{
& A \\
A \times B \ar[ur]^{\pi_A} \ar[dr]_{\pi_B} \\
& B
}
\end{equation*}

Here, $\pi_A$ and $\pi_B$ are the regular projection maps from the product onto A and B respectively. We would like to show that given any object $(Z,h_1,g_1)$, there exists a unique $\theta$ that makes the diagram below commute.

\begin{equation*}
\xymatrix{
& & A \\
Z \ar@/^/[urr]^{h_1} \ar@/_/[drr]_{g_1} \ar[r]^{\theta} & A\times B \ar[ur]^{\pi_A}\ar[dr]_{\pi_B} \\
& & B
}
\end{equation*}

Consider $\theta = h_1 \times g_1 : Z \rightarrow A\times B$ defined by $\theta(z) = (h_1(z),g_1(z))$. This makes the diagram above commute. It is also unique, as this definition is forced by the commutativity of the diagram: the conditions $\pi_A(\theta(z)) = h_1(z)$ and $\pi_B(\theta(z)) = g_1(z)$ determine both coordinates of $\theta(z)$. $\square$
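For small finite sets, both the existence and the uniqueness of $\theta$ can be checked exhaustively. In the Python sketch below (the sets and maps are made up for illustration), functions are represented as dicts:

```python
from itertools import product as cartesian

A, B, Z = ['a1', 'a2'], ['b1'], [0, 1]
h1 = {0: 'a1', 1: 'a2'}   # h1 : Z -> A
g1 = {0: 'b1', 1: 'b1'}   # g1 : Z -> B

theta = {z: (h1[z], g1[z]) for z in Z}   # theta = h1 x g1
pi_A, pi_B = (lambda p: p[0]), (lambda p: p[1])

# theta makes the diagram commute ...
assert all(pi_A(theta[z]) == h1[z] and pi_B(theta[z]) == g1[z] for z in Z)

# ... and it is the only map Z -> A x B that does.
AxB = list(cartesian(A, B))
for vals in cartesian(AxB, repeat=len(Z)):
    t = dict(zip(Z, vals))
    if all(pi_A(t[z]) == h1[z] and pi_B(t[z]) == g1[z] for z in Z):
        assert t == theta
```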

If we consider the category $(\mathbb{Z},\leq)$, we would like to see whether this category has products. Suppose for now that the product of two integers m and n does exist, and denote it by $m \times n$. Then we must have $m \times n \leq m$ and $m \times n \leq n$. Also, if there exists another integer $s$ such that $s \leq m$ and $s \leq n$, then $s \leq m \times n$. Hence $m \times n = \min(m,n)$. Thus, category theory says that min and the product are really related to each other.
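This characterization of the product in $(\mathbb{Z},\leq)$ can be verified directly on a finite range of integers (a Python sketch for illustration; the range is an arbitrary choice):

```python
def is_product(p, m, n, universe):
    """In (Z, <=) there is a morphism a -> b exactly when a <= b, so the
    universal property of the product reads: p <= m, p <= n, and every s
    with s <= m and s <= n satisfies s <= p."""
    return (p <= m and p <= n and
            all(s <= p for s in universe if s <= m and s <= n))

universe = range(-10, 11)
assert all(is_product(min(m, n), m, n, universe)
           for m in universe for n in universe)
```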

Now we will explore the dual aspect of products. In the category of sets we have the notion of disjoint union. Is it possible to generalize this notion to arbitrary categories? The answer to that question is yes.

To do that, we will construct a new category which is really the dual of the one above, obtained by reversing the arrows of the category $\mathcal{C}_{A,B}$. However, let us define it formally for completeness' sake. Suppose that $\mathcal{C}$ is a category and $A$ and $B$ are objects in it. Construct a new category, which we will denote by $\mathcal{C}^{A,B}$.

$Obj(\mathcal{C}^{A,B})$ = morphisms given by the diagram below:
\begin{equation*}
\xymatrix{
& A \ar[dl]_{h} \\
Z\\
& B \ar[ul]^{g} \\
}
\end{equation*}

We can think of the objects as tuples $(h,g,Z)$. I wrote this differently than for the category $\mathcal{C}_{A,B}$ to emphasize that the arrows emanate from A and B, not from Z.

Morphisms: Are given by the following commutative diagram:
\begin{equation*}
\xymatrix{
& & A \ar[dl]^{h_2} \ar@/_/[dll]_{h_1} \\
Z_1 & Z_2 \ar[l]_{\theta} \\
& & B \ar[ul]_{g_2}  \ar@/^/[ull]^{g_1}
}
\end{equation*}

That is, we can think of a morphism from $(h_2,g_2,Z_2)$ to $(h_1,g_1,Z_1)$ as a morphism from $Z_2$ to $Z_1$ in $\mathcal{C}$ making the commutative diagram above hold.

Recall that in sets we can form the disjoint union of two sets. We can generalize this notion to an arbitrary category.

Definition 2:

Suppose $\mathcal{C}$ is a category and A and B are arbitrary objects in it. The coproduct (if it exists) is the initial object in the category $\mathcal{C}^{A,B}$. We shall denote the coproduct by $A \coprod B$. That is, given any object $(h,g,Z)$, there exists a unique morphism $\theta$ from $A \coprod B$ to $Z$ such that the diagram below commutes:

\begin{equation*}
\xymatrix{
& & A \ar[dl]^{h} \ar@/_/[dll]_{h_1} \\
{A \coprod B} & Z \ar[l]_{\theta} \\
& & B \ar[ul]_{g}  \ar@/^/[ull]^{g_1}
}
\end{equation*}

It is left as an exercise to the reader that the disjoint union is indeed an initial object in $\mathcal{C}^{A,B}$ when $\mathcal{C}$ is the category of sets. Since there are many ways to form a disjoint union, this tells us right away that all those constructions are isomorphic to each other. This is the power of category theory. In the category $(\mathbb{Z},\leq)$, one can check that the coproduct corresponds to taking the maximum.
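Dually, the claim that the coproduct in $(\mathbb{Z},\leq)$ is the maximum can be checked the same way (again a Python sketch over an arbitrary finite range):

```python
def is_coproduct(c, m, n, universe):
    """Dual condition: m <= c, n <= c, and c <= s for every s that
    receives morphisms from both m and n (i.e. m <= s and n <= s)."""
    return (m <= c and n <= c and
            all(c <= s for s in universe if m <= s and n <= s))

universe = range(-10, 11)
assert all(is_coproduct(max(m, n), m, n, universe)
           for m in universe for n in universe)
```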

We are now completely done with our summary of Chapter 1 of Aluffi. In the next few posts, we will solve all the problems in Chapter 1. Stay posted.

## Tuesday, January 3, 2017

### Categories and special morphisms between objects within a category

In this post, I will give an introduction to category theory. In layman's terms, we can think of a category as a system of Mathematical objects, where there is a way to relate any two objects in the system in a nice way.

Let us dive right into the definition of a category:

Definition 1:

$\mathcal{C}$ is said to be a category if it consists of:

(1)A class $Obj(\mathcal{C})$ of objects of the category.
(2)For any objects $A,B \in Obj(\mathcal{C})$, there exists a set $Hom_{\mathcal{C}}(A,B)$ with the following properties :

i)Given any $f \in Hom_{\mathcal{C}}(A,B)$ and $g \in Hom_{\mathcal{C}}(B,C)$, there exists $h := g \circ f \in Hom_{\mathcal{C}}(A,C)$. More formally, there is a function (of sets)

$$Hom_{\mathcal{C}}(A,B) \times Hom_{\mathcal{C}}(B,C) \rightarrow Hom_{\mathcal{C}}(A,C).$$

We shall denote the image of the pair $(f,g)$ by $g \circ f$. This composition is required to be associative: $h \circ (g \circ f) = (h \circ g) \circ f$ whenever the compositions are defined.

ii)For any object $A \in Obj(\mathcal{C})$ there exists at least one morphism denoted by $1_A \in Hom_{\mathcal{C}}(A,A)$, called the identity on A. Moreover, it is an identity with respect to morphism composition: for any $f \in Hom_{\mathcal{C}}(A,B)$,

$$f \circ 1_A = f, \quad 1_B \circ f = f.$$
iii)$Hom_{\mathcal{C}}(A,B) \cap Hom_{\mathcal{C}}(C,D) = \emptyset$ unless $A = C$ and $B = D$.

Most of the time in categories, unlike sets, there is no defined notion of what it means to be an element of a particular object. That is why category theory is sometimes called abstract nonsense. It is important to remember that "abstract nonsense" isn't something derogatory; it refers to the fact that we don't really "know" what is inside each object, yet we do know how objects interact with each other in a category.

A very familiar category is the category of sets; in fact, we can keep it in mind in order to recall the definition of a category. We will give examples of categories and prove that they are indeed categories, to get our hands dirty and become more familiar with the definition. Whenever the category is understood we shall drop the subscript $\mathcal{C}$; in the next few examples, we shall do that.

Examples of categories:

Suppose that $S$ is a set with a relation $\sim$ on S that is transitive and reflexive. Define a new category $\mathcal{C}_{S/\sim}$ as follows:

1)$Obj(\mathcal{C}_{S/ \sim}) = S$
2)For any two objects $s_1,s_2 \in S$: if $s_1 \sim s_2$, then we define $Hom(s_1,s_2) = \{(s_1,s_2)\}$; otherwise $Hom(s_1,s_2) = \emptyset$.
We have to define how to compose two morphisms f and g, and check that the conditions for morphism composition are satisfied. Suppose $f \in Hom(s_1,s_2)$ and $g \in Hom(s_2,s_3)$. Then, necessarily, $f = (s_1,s_2)$ and $g = (s_2,s_3)$. By definition, we have the following: $$s_1 \sim s_2$$
$$s_2 \sim s_3.$$
Since $\sim$ is transitive, $s_1 \sim s_3$, so we may define $g \circ f := (s_1,s_3)$. We have to verify that the composition as defined is associative. Suppose that $f = (s_1,s_2)$, $g = (s_2,s_3)$, and $h = (s_3,s_4)$. We have to verify that $$h \circ (g \circ f) = (h \circ g) \circ f.$$
By definition of morphism composition as defined:
$$g \circ f = (s_1,s_3)$$
$$h \circ g = (s_3,s_4).$$
Thus, we have $$h \circ (g \circ f) = (s_1,s_4) = (h \circ g) \circ f.$$
A specific instance of this category is $S = \mathbb{Z}$ and $\sim\ = \ \leq$. A diagram in a category is a collection of morphisms between objects in the category. An example of a diagram in this category is:

\begin{equation*}

\xymatrix{
1 \ar[d] \ar[r] \ar[dr] & 2 \ar[d] \ar[ld] \\
3 \ar[r] & 4
}
\end{equation*}

This category will be important to us, as it will provide us with a lot of counterexamples. More on that later.

The following definition formalizes the question of what it means for two objects in a category to be similar.

Definition 2:
Suppose $\mathcal{C}$ is a category with A and B as objects in this category.

1)$f \in Hom(A,B)$ is said to be an isomorphism iff there exists $g \in Hom(B,A)$ such that
$$g \circ f = 1_A \ \ \& \ \ f \circ g = 1_B.$$
2)Two objects $A$ and $B$ are said to be isomorphic iff there exists an isomorphism f between them.

We might want to generalize the notions of a function being surjective or injective to categories. Since categories in general don't have elements, there is only one sensible way of doing that.

Definition 3:
Suppose $\mathcal{C}$ is a category with A and B as objects in this category.

1)A morphism $f \in Hom(A,B)$ is said to be epic iff the following condition holds:
for all objects $Z \in Obj(\mathcal{C})$ and all morphisms $\alpha_1,\alpha_2 : B \rightarrow Z$, we have $\alpha_1 \circ f = \alpha_2 \circ f \implies \alpha_1 = \alpha_2$.

2)A morphism $f \in Hom(A,B)$ is said to be monic iff the following condition holds:
for all objects $U\in Obj(\mathcal{C})$ and all morphisms $\beta_1,\beta_2 : U \rightarrow A$, we have $f \circ \beta_1 = f \circ \beta_2 \implies \beta_1 = \beta_2$.

In the category of sets, a function is epic iff it is surjective, and monic iff it is injective. It follows that a function that is both epic and monic is an isomorphism. Hence we might conjecture that any morphism which is both epic and monic must be an isomorphism. However, in the category $(\mathbb{Z},\leq)$ every morphism is both epic and monic, yet the only isomorphisms are those from an object to itself. It is a good exercise to check the details of this.

Now we talk about an important aspect of category theory, one which really shows why category theory matters: universal properties. First, we need the following definition:

Definition 4:

Fix a particular category $\mathcal{C}$.

1)An object $\mathbb{I}$ is said to be an initial object of the category iff for every object $B \in Obj(\mathcal{C})$, $Hom(\mathbb{I},B)$ is a singleton; that is, for every object $B$ in our category, there is a unique morphism from $\mathbb{I}$ to $B$.
2)An object $\mathbb{F}$ is said to be a final object of the category iff for every object $B \in Obj(\mathcal{C})$, $Hom(B,\mathbb{F})$ is a singleton; that is, for every object $B$ in our category, there is a unique morphism from $B$ to $\mathbb{F}$.

We can use this notion to define universal properties. They will show up again and again in this blog: when we start solving problems from Aluffi's Chapter 0, as well as when we move towards more advanced material in that book.

Definition 5:

Fix a particular category $\mathcal{C}$.

A category $\mathcal{C}$ is said to have a universal property iff it has either an initial or a final object.

We shall refer to initial/final objects as terminal objects; when it matters whether an object is initial or final, we will say so explicitly. A category need not have a unique terminal object. However, if terminal objects exist and are not unique, then they are necessarily isomorphic.

Theorem 1:
Fix a particular category $\mathcal{C}$.

1)If  $\mathbb{F}_1$ and $\mathbb{F}_2$ are final objects, then $\mathbb{F}_1 \cong \mathbb{F}_2$.
2)If  $\mathbb{I}_1$ and $\mathbb{I}_2$ are initial objects, then $\mathbb{I}_1 \cong \mathbb{I}_2$.

We shall prove this when we start working on the problems from Aluffi's Chapter 0. This post gave an introduction to universal properties. In the next post, we give some examples and see surprising connections between the direct product of two sets and the minimum of two numbers, and between the disjoint union and the maximum of two numbers. Moreover, we will see that direct products and coproducts can be considered opposites/duals of each other (whatever that means).

## Saturday, December 24, 2016

### Categories preliminaries

Before we start talking about category theory proper, we will talk about sets in this post; in the next few posts we will generalize these notions to arbitrary categories and give some examples.

Normally, when we talk about a function $f : A \rightarrow B$, its properties, such as injectivity and surjectivity, are defined in terms of elements. In this post, we will develop a more mature way of thinking about injectivity and surjectivity, and show that any function has a canonical decomposition. This different mode of thinking will give us some intuition when we start working with categories.

To introduce this mature way of thinking we need the following theorem:

Theorem 1:
(1) A function $f : A \rightarrow B$ with $A$ nonempty is injective iff it has a left inverse.
(2) A function $f : A \rightarrow B$ is surjective iff it has a right inverse.
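For finite sets, the inverses in Theorem 1 can be built explicitly. The sketch below (sets and functions are my own illustration) constructs a left inverse for an injective map and a right inverse for a surjective one:

```python
def left_inverse(f, A, B):
    """For injective f: A -> B, build g: B -> A with g∘f = id_A.
    Off the image of f, g may send b anywhere; we default to A[0]."""
    g = {b: A[0] for b in B}            # arbitrary values off Im(f)
    for a in A:
        g[f[a]] = a                     # forces g(f(a)) = a
    return g

def right_inverse(f, A, B):
    """For surjective f: A -> B, build h: B -> A with f∘h = id_B
    by choosing one preimage for each b."""
    return {b: next(a for a in A if f[a] == b) for b in B}

A, B = [1, 2], ['x', 'y', 'z']
f = {1: 'x', 2: 'z'}                               # injective
g = left_inverse(f, A, B)
print(all(g[f[a]] == a for a in A))                # g∘f = id_A: True

s = {1: 'p', 2: 'q', 3: 'p'}                       # surjective onto {'p','q'}
h = right_inverse(s, [1, 2, 3], ['p', 'q'])
print(all(s[h[b]] == b for b in ['p', 'q']))       # s∘h = id_B: True
```

Note that `right_inverse` has to *choose* a preimage for each $b$; for infinite sets this choice is exactly where the axiom of choice enters.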

We can see that Theorem 1 allows us to define injectivity and surjectivity not in terms of elements but in terms of whether a specific map exists. We can do even better: we can characterize them by how $f$ interacts with other maps, that is, by how $f$ behaves when composed with arbitrary maps.

Definition 1:
A function $f : A \rightarrow B$ is said to be monomorphic iff for every set $Z$ and for every pair of maps $\alpha_1,\alpha_2 : Z \rightarrow A$, if $f \circ \alpha_1 = f \circ \alpha_2$ then $\alpha_1 = \alpha_2$.

Definition 2:
A function $f : A \rightarrow B$ is said to be epimorphic iff for every set $U$ and for every pair of maps $\beta_1,\beta_2 : B \rightarrow U$, if $\beta_1 \circ f = \beta_2 \circ f$ then $\beta_1 = \beta_2$.

We can see from the above definitions that monomorphic and epimorphic functions are characterized by how they interact with other maps. In the category of sets we can say even more.

Theorem 2:

1)A function $f : A \rightarrow B$ is injective iff f is monomorphic.
2)A function $f : A \rightarrow B$ is surjective iff f is epimorphic.

Proof:
(1)

Suppose that $f$ is injective, and let $g$ be a left inverse of $f$ (Theorem 1). Suppose that $Z$ is a set and $\alpha_1,\alpha_2 : Z \rightarrow A$ satisfy $f \circ \alpha_1 = f \circ \alpha_2$. Composing with $g$ on the left gives $\alpha_1 = g \circ f \circ \alpha_1 = g \circ f \circ \alpha_2 = \alpha_2$. Thus, $f$ is a monomorphic map.

Conversely, suppose that $f$ is a monomorphic map. Suppose that $a_1,a_2$ are arbitrary elements of $A$ such that $f(a_1) = f(a_2)$.

Let $Z = \{a_1,a_2\}$ and let $\alpha_1, \alpha_2 : Z \rightarrow A$ be defined as:

$\alpha_1(a_1) = a_2$, $\alpha_1(a_2) = a_1$ and $\alpha_2(a_1) = a_1$, $\alpha_2(a_2) = a_2$.

Then, since $f(a_1) = f(a_2)$, one can check that $$f \circ \alpha_1 = f \circ \alpha_2.$$
Since $f$ is monomorphic, $\alpha_1 = \alpha_2$. In particular $\alpha_1(a_1) = \alpha_2(a_1)$, which means that $a_1 = a_2$, so $f$ is injective.

(2)
Suppose that $f$ is surjective. We can use an argument similar to (1) to prove that $f$ is an epimorphic function; here we compose with a right inverse of $f$ on the right.

Conversely, suppose that $f$ is an epimorphic function. We argue by contradiction. Assume that $f$ is not surjective, so there exists $b \in B$ such that no $a \in A$ is mapped to it. Set $U = \{0,1\}$. Let $\beta_1 : B \rightarrow U$ be the constant map $\beta_1(x) = 0$, and let $\beta_2 : B \rightarrow U$ agree with $\beta_1$ everywhere except at $b$, where $\beta_2(b) = 1$. Since $b$ is not in the image of $f$, we have $$\beta_1 \circ f = \beta_2 \circ f.$$ However, by construction $$\beta_1 \neq \beta_2,$$ contradicting the assumption that $f$ is epimorphic. $\square$
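The two witness constructions from the converse directions can be replayed concretely. The sketch below (the small sets and maps are my own illustration) exhibits, for a non-injective $f$, a swap/identity pair that violates the monomorphic condition, and for a non-surjective map, an indicator-style pair that violates the epimorphic condition:

```python
# (1) f not injective: the swap/identity pair detects it.
A = [1, 2]
f = {1: 'x', 2: 'x'}                    # f(1) = f(2), not injective
a1 = {1: 2, 2: 1}                       # alpha_1 swaps a_1 and a_2
a2 = {1: 1, 2: 2}                       # alpha_2 is the identity on Z = {a_1, a_2}
print(all(f[a1[z]] == f[a2[z]] for z in A))   # f∘alpha_1 = f∘alpha_2: True
print(a1 == a2)                               # yet alpha_1 != alpha_2: False

# (2) g not surjective: the indicator of the missed point detects it.
A2 = [1]
g = {1: 'p'}                            # 'q' is never hit
b1 = {'p': 0, 'q': 0}                   # beta_1: constant 0
b2 = {'p': 0, 'q': 1}                   # beta_2: differs only at the missed point
print(all(b1[g[a]] == b2[g[a]] for a in A2))  # beta_1∘g = beta_2∘g: True
print(b1 == b2)                               # yet beta_1 != beta_2: False
```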

Theorem 2 above says that we can think of injectivity and surjectivity in terms of how $f$ interacts with other maps. We don't care about elements, or about whether $f$ has a specific inverse map with certain properties. This will be very useful for us later when we explore categories.

The next theorem tells us that any function has a very specific decomposition. This will be useful when we start talking about the first isomorphism theorem.

Theorem 3:

Suppose we have an arbitrary function $f : A \rightarrow B$. Then $f$ can be decomposed as follows:
$$A \twoheadrightarrow A/\sim \rightarrow Im(f) \hookrightarrow B.$$

The equivalence relation $\sim$ is defined by $a_1 \sim a_2 \iff f(a_1) = f(a_2)$. Essentially, what this theorem says is that any function can be realized as a surjection followed by an isomorphism followed by an injection.

Proof:

The first map is the projection $a \mapsto [a]$. The second map is defined by $[a] \mapsto f(a)$; this is well defined because $a_1 \sim a_2$ means exactly that $f(a_1) = f(a_2)$. The third map is just the inclusion. Altogether we have $a \mapsto [a] \mapsto f(a) \mapsto f(a)$. Therefore, we just need to check that the middle map is an isomorphism. Denote that map by $\theta$.

Suppose that $b \in Im(f)$. Then, by definition there exists $a \in A$ such that $f(a) = b$. Consider the class $[a]$. Then, by definition, $$\theta([a]) = f(a) = b.$$ Thus $\theta$ is surjective.

Suppose that $\theta([a_1]) = \theta([a_2])$. Then $f(a_1) = f(a_2)$, so $a_1 \sim a_2$, that is, $[a_1] = [a_2]$. Hence, $\theta$ is injective. $\square$
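The canonical decomposition can be computed directly for a finite function. The sketch below (sets and function are my own illustration) builds the quotient $A/\sim$, the projection $a \mapsto [a]$, and the bijection $\theta : A/\sim \,\rightarrow Im(f)$, then verifies that the composite recovers $f$:

```python
def canonical_decomposition(f, A):
    """Split f : A -> B into projection, bijection theta, and inclusion into B."""
    # Group elements by their image: a1 ~ a2 <=> f(a1) = f(a2).
    classes = {}
    for a in A:
        classes.setdefault(f[a], []).append(a)
    quotient = [frozenset(cls) for cls in classes.values()]   # A/~
    proj = {a: cls for cls in quotient for a in cls}          # a -> [a]
    theta = {cls: f[next(iter(cls))] for cls in quotient}     # [a] -> f(a), well defined
    return proj, theta

A = [1, 2, 3, 4]
f = {1: 'x', 2: 'x', 3: 'y', 4: 'z'}
proj, theta = canonical_decomposition(f, A)

# The composite a -> [a] -> f(a) -> f(a) recovers f:
print(all(theta[proj[a]] == f[a] for a in A))             # True
# theta hits each element of Im(f) exactly once, i.e. it is a bijection:
print(sorted(theta.values()) == sorted(set(f.values())))  # True
```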