Algebra Part I

I have ideas for a couple posts I want to write, but unfortunately, they both will require some level of familiarity with abstract algebra, and I don’t want to just assume the reader has the necessary prereqs and then go on writing them. Instead, I’ve given myself the ambitious goal 1 of introducing most of the relevant algebra (spoiler: 2) in a series of blog posts beginning with this one on group theory.

Bit of a Disclaimer
I can’t possibly mention everything on a particular subject in one post, and I am not a huge fan of writing insanely long posts, so some things have to be cut. In particular, I aim to introduce most of the important topics in each subject without necessarily doing a deep dive, and while I will try to mention specific examples of things, I won’t spend too much time looking at them closely. It will be up to you to take the time to make sure the examples make sense. Because of this, I’ll try to include exercises that should be good checks of understanding. Finally, as always, things are presented according to my tastes and according to whatever order they happen to pop into my head; hence, they are not necessarily done the usual way.

What’s a group?

The natural place to start is with the definition of a group. Broadly speaking, groups follow two important themes in mathematics. These are the idea that, in math, we like to study collections of objects that possess some kind of structure, and the idea that symmetry is often beneficial to doing mathematics. With that said, a group is intuitively a collection of symmetries 3 where you can think of a symmetry as some action 4 on an object that preserves its shape. We’ll see this more explicitly with our first example of a group.

Before we get to a formal definition, let’s look at $D_8$, the group of symmetries of a square. These are all the actions you can perform on a square that leave it visually unchanged. To help make sense of what they are, we’ll visualize them using a square with labelled vertices.

Above is our square starting out. The simplest symmetry we can apply to it is the “do nothing” symmetry. However, that’s not a particularly exciting thing to do, so we spend a little more time thinking about how we can reposition the square, and decide to try rotating it 90 degrees (counterclockwise). After thinking about it some more, we realize we could also flip the square about its main diagonal (from 1 to 3)

Now the interesting thing happens when we try to compose these. What if we rotate and then flip (left image), or flip and then rotate (right image)?

The first thing we notice is that these images are different, so the order of symmetries matters. The second thing we may notice is that the image on the left is the previous right image (the flip) rotated 270 degrees, while the image on the right is the previous left image (the rotation) flipped across the other diagonal. Letting $r$ denote a rotation, $f$ denote a flip across the main diagonal, and $f'$ denote a flip across the other diagonal, symbolically, we have

$$fr=r^3f \qquad\text{and}\qquad rf=f'r$$

where $st$ denotes the result of first applying $t$ and then applying $s$. Now, there is more that can be said about this group, but I’ll leave the exploring to you.
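These composition rules can be sanity-checked in a few lines of code by modeling each symmetry as a permutation of the square’s vertex labels. The encoding below (positions numbered counterclockwise from the bottom-left) is a sketch of my own, not something from the post:

```python
# State of the square: a tuple of vertex labels read counterclockwise,
# so positions 0,1,2,3 hold vertices 1,2,3,4 at the start.

def rotate(s):
    """Rotate 90 degrees counterclockwise: the label at position i moves to i+1."""
    return tuple(s[(j - 1) % 4] for j in range(4))

def flip_main(s):
    """Flip across the main diagonal (through positions 0 and 2)."""
    return (s[0], s[3], s[2], s[1])

def flip_other(s):
    """Flip across the other diagonal (through positions 1 and 3)."""
    return (s[2], s[1], s[0], s[3])

start = (1, 2, 3, 4)

# rotate-then-flip lands in the same place as flip-then-rotate-three-times
assert flip_main(rotate(start)) == rotate(rotate(rotate(flip_main(start))))

# flip-then-rotate agrees with rotate-then-flip-across-the-other-diagonal
assert rotate(flip_main(start)) == flip_other(rotate(start))
```

Since each symmetry is a fixed permutation of positions, checking equality on one labelled square is enough to conclude that the symmetries themselves agree.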

From the above example, we can see some properties we might want in order to call something a collection of symmetries (i.e. a group). First, there should be a “do nothing” symmetry that just leaves things unchanged. Second, symmetries don’t necessarily have to commute (that is, $rf\neq fr$ in general), but they should always be composable. I didn’t highlight these last two, but we should also expect that applying symmetries one after the other always gets you the same thing as long as you apply them in the same order (e.g. $(fr)r$ and $f(rr)$ should be the same), and we want to be able to undo a symmetry. Putting all these together, we get the following definition.

A group $(G,*)$ is a set $G$ together with an operation $*:G\times G\rightarrow G$ satisfying the following for all $a,b,c\in G$

  • (Associativity) $(a*b)*c=a*(b*c)$
  • (Identity) There exists some $e\in G$ s.t. $a*e=a=e*a$
  • (Inverses) There exists some $a^{-1}\in G$ s.t. $a*a^{-1}=e=a^{-1}*a$

If it turns out that additionally, $a*b=b*a$ for all $a,b\in G$, then $G$ is called an abelian group 5

That’s all there is to it. 3 simple conditions, and we’ll see that groups exhibit some well-behaved properties 6. Most of the time when talking about a specific group, the underlying operation will be understood, and so I’ll just refer to the group as $G$ instead of $(G,*)$. Furthermore, the group operation is usually denoted $ab$ instead of $a*b$ cause mathematicians are lazy. If the group is abelian, then $a+b$ is also common. Now that we have a definition, try to answer as many of these as you can.

Are the following groups, abelian groups, or neither?

  • $D_8$
  • $(\Z,+)$
  • $(\Z,*)$
  • $(\R,+)$
  • $(\R,*)$
  • $(\R-\{0\},*)$
  • $(M_2(\R),+)$ the set of $2\times2$ real matrices under addition
  • $(\Q-\{0\},*)$
  • $D_{2n}$ the set of symmetries of a regular $n$-gon under composition
  • $(2\Z, +)$ the even numbers under addition
  • $(\Z_{12}, +)$ the integers mod 12 under addition
  • $(\Z_7-\{0\}, *)$ the integers mod 7 (except 0) under multiplication
  • the empty set under any operation you like
  • $\{e\}$ the singleton consisting of only the identity
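For the finite examples in this list, you can brute-force the group axioms directly rather than reasoning them out. The checker below is a quick sketch of my own (not from the post), shown on a couple of the entries:

```python
def is_group(elems, op):
    """Brute-force check of closure, associativity, identity, and inverses."""
    if any(op(a, b) not in elems for a in elems for b in elems):
        return False  # not closed
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a in elems for b in elems for c in elems):
        return False  # not associative
    ids = [e for e in elems if all(op(e, a) == a == op(a, e) for a in elems)]
    if not ids:
        return False  # no identity
    e = ids[0]
    return all(any(op(a, b) == e == op(b, a) for b in elems) for a in elems)

def is_abelian(elems, op):
    return all(op(a, b) == op(b, a) for a in elems for b in elems)

Z12 = set(range(12))
assert is_group(Z12, lambda a, b: (a + b) % 12)       # integers mod 12, addition
assert is_abelian(Z12, lambda a, b: (a + b) % 12)

Z7_units = set(range(1, 7))
assert is_group(Z7_units, lambda a, b: (a * b) % 7)   # integers mod 7 minus 0
assert not is_group(Z12, lambda a, b: (a * b) % 12)   # most elements lack inverses
```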

Basic Properties of Groups

Alright, now that we know what a group is, let’s see some of the benefits of studying them. The most obvious one is generality. If we show something is true for an arbitrary group $G$, then we automatically know it’s true for the integers or the reals or matrices or what have you. So to start off, let’s prove some basic facts about groups.

The identity element of a group is unique

Pf: Let $$G$$ be a group, and suppose that both $$e$$ and $$f$$ are identity elements. That is, for any $$a\in G$$, we have $$ae=a=ea$$ and $$af=a=fa$$. Hence, the theorem follows from
$$\begin{align*} e=ef=f \end{align*}$$ where we used the fact that $$f$$ is the identity on the left equality, and the fact that $$e$$ is the identity on the right equality. $$\square$$

The above theorem maybe isn’t too surprising. It basically says that there’s only one way to do nothing. The next theorem maybe isn’t surprising either.

Let $a$ be an element of a group. Then, $a$ has a unique inverse.

Pf: Exercise for the reader

Now that we know that inverses are unique, we’ll denote the inverse of $a$ by $a^{-1}$ (or $-a$ if the group is abelian and written additively). We’ll see a couple more proofs, and then we’ll get a look at something maybe a little less abstract.

Theorem (Socks and Shoes 7)
You put socks on before wearing shoes, but you have to remove your shoes before you can remove your socks. Symbolically, in a group $G$, for any $a,b\in G$, we have $(ab)^{-1}=b^{-1}a^{-1}$.

Pf:
$$\begin{align*} (ab)(b^{-1}a^{-1}) &= a(bb^{-1})a^{-1}\\ &= aea^{-1}\\ &= aa^{-1} \\ &= e \end{align*}$$
Since inverses are unique, we must have $$(ab)^{-1}=b^{-1}a^{-1}$$.$$\square$$
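To see socks-and-shoes in action in a group where order genuinely matters, we can brute-force it over all permutations of three symbols under composition (an illustrative choice of group on my part, not one from the post):

```python
from itertools import permutations

def compose(s, t):
    """Apply t first, then s, matching the post's convention for writing st."""
    return tuple(s[t[i]] for i in range(3))

def inverse(s):
    """Undo the permutation s by sending each image back where it came from."""
    inv = [0, 0, 0]
    for i, x in enumerate(s):
        inv[x] = i
    return tuple(inv)

identity = (0, 1, 2)
for a in permutations(range(3)):
    for b in permutations(range(3)):
        ab = compose(a, b)
        assert inverse(ab) == compose(inverse(b), inverse(a))  # (ab)^-1 = b^-1 a^-1
        assert compose(ab, inverse(ab)) == identity
```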

If $G$ and $H$ are groups, then we can form the group $G\times H$ of pairs of elements of $G$ and $H$ with product $(g_1,h_1)(g_2,h_2)=(g_1g_2,h_1h_2)$.

Pf: Exercise for the reader. Verify the group properties.

Note that the above group is called the direct product of $G$ and $H$. It is sometimes also denoted $G\oplus H$, and called their direct sum. These notions are the same here, but differ if you start talking about the direct product/sum of infinite collections of groups; however, we won’t get into that here.

Structure of Groups

Let’s return to our $D_8$ example. Recall that $r$ denotes rotation by 90 degrees and $f$ denotes a flip about the main diagonal. A fact that I will not prove here, but that you can spend some time convincing yourself of, is that $D_8$ is generated by $r$ and $f$ in the following sense.

Let $G$ be a group and $S$ be a subset of $G$. Then, we say $G$ is generated by $S$ if every element of $G$ is some (finite) product of elements of $S$. Furthermore, if $S$ is finite 8, then we say that $G$ is finitely generated.

The remark about the definition amounts to saying that any symmetry of a square is really just the result of a bunch of flips and rotations. I mention this just out of curiosity. I had planned on using it as justification in giving a more explicit description for $D_8$, but unfortunately, the description would be longer than I’m willing to type out, so let’s look at a different group instead.

Our new star group is $\Z_4$, which is the integers under addition (mod 4). This group is special for a few reasons, but most of these are a result of it being generated by 1 element, which motivates the following definition.

A group $G$ is called cyclic if it is generated by a single element. That is, it is cyclic iff there exists some $g\in G$ s.t. every $a\in G$ can be written in the form $g^n$ for some integer $n$.

In order to better understand $\Z_4$’s structure, we will look at its “multiplication” table.

Now, let’s consider the group $U(10)$. This is the integers (mod 10) that are coprime to 10 under multiplication. Hence, its multiplication table looks like

Now, you look at this and ask “what’s the point?” cause we’re just looking at some random tables. However, an important observation is that things like $2\in\Z_4$ and $7\in U(10)$ are just symbols. It doesn’t really matter what we call the elements of the group; all that’s important is how they relate to each other. As an exercise in this way of thinking, let’s relabel these tables using the following mappings

If we do that and remake the tables, we will get the following for $\Z_4$

Repeat this process for $U(10)$ and look at the table you get. If everything went as planned, you will see you ended up with exactly the same table. Since we said that the important thing was not the symbols but how they interacted with each other, this gives us complete justification in saying that, in some sense, $\Z_4$ and $U(10)$ are the same group. This is an idea we will now make rigorous via the notion of structure-preserving maps.
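Rather than writing out and relabeling the tables by hand, the check can be automated. Assuming the second group is $U(10)$, the units mod 10, one relabeling that works is $n\mapsto 3^n\pmod{10}$ (my own choice; the mapping in the post’s images may differ):

```python
Z4 = [0, 1, 2, 3]           # integers mod 4 under +
U10 = [1, 3, 7, 9]          # integers mod 10 coprime to 10, under *

# relabel n in Z_4 as 3^n mod 10: 0->1, 1->3, 2->9, 3->7
phi = {n: pow(3, n, 10) for n in Z4}
assert sorted(phi.values()) == sorted(U10)   # a bijection onto U(10)

# the relabelled addition table of Z_4 IS the multiplication table of U(10)
for a in Z4:
    for b in Z4:
        assert phi[(a + b) % 4] == (phi[a] * phi[b]) % 10
```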

Let $G$ and $H$ be two groups. A homomorphism or group map is a function $f:G\rightarrow H$ with the property that for any $a,b\in G$, we have $f(ab)=f(a)f(b)$. If, furthermore, $f$ is injective, then we call it an embedding of $G$ into $H$, and if it is bijective, then it is called an isomorphism and we say that $G$ and $H$ are isomorphic groups and denote this $G\simeq H$.

In essence, homomorphisms let us relate the structures of two groups by saying that they are doing something similar. If the homomorphism $f:G\rightarrow H$ is injective, then it is essentially saying that a copy of $G$ lives inside of $H$. An example of this is the inclusion $f:\Z\rightarrow\R$ given by $f(n)=n$.

It is not hard to verify that this is an injective homomorphism, so it lets us realize $\Z$ as sitting inside of $\R$ in the form $f(\Z)$. By looking at their multiplication tables, we showed earlier that $\Z_4\simeq U(10)$. To get a better handle on homomorphisms and see why they are the natural type of function to consider for groups, we’ll prove some quick theorems

Let $f:G\rightarrow H$ be a homomorphism, and let $e_G,e_H$ be the identities of $G$ and $H$, respectively. Then, $f(e_G)=e_H$.

Pf: Fix any $$g\in G$$ so $$f(g)=f(ge_G)=f(g)f(e_G)$$. If we multiply both sides (on the left) by $$f(g)^{-1}$$, then this equation becomes $$e_H=f(g)^{-1}f(g)=f(g)^{-1}f(g)f(e_G)=f(e_G)$$ $$\square$$

Let $f:G\rightarrow H$ be a homomorphism. Then, for any $a\in G$ and $n\in\Z$, we have $f(a^n)=f(a)^n$.

Pf: We will show that $$f(a^{-1})=f(a)^{-1}$$ and the theorem will follow from an easy induction argument I won't bother doing. This case is an immediate consequence of $$f(a)f(a^{-1})=f(aa^{-1})=f(e_G)=e_H$$$$\square$$

Let $G$ be a group and fix some $a\in G$ with $a\neq e$. The function $f:G\rightarrow G$ defined by $f(x)=ax$ is a bijection, but not a homomorphism.

Pf: Exercise for the reader

Let $f:G\rightarrow H$ be an isomorphism. Then, $f^{-1}:H\rightarrow G$ is also an isomorphism.

Pf: Exercise for the reader

$\simeq$ is an equivalence relation on the class of groups.

Pf: Exercise for the reader


When introducing embeddings, I mentioned that they give us a way of viewing one group inside of another. This idea is formalized via the notion of subgroups, which are pretty much exactly what they sound like.

Let $H$ be a subset of a group $G$. We say $H$ is a subgroup of $G$, denoted $H\le G$, if $H$ itself is a group under $G$’s operation.

Let $f:G\rightarrow H$ be a homomorphism. Then, $f(G)$ is a subgroup of $H$. Furthermore, if $f$ is injective, then $G\simeq f(G)$.

Pf: Let $$I:=f(G)$$. To show that $$I$$ is a group, we need to show it contains the identity, has inverses, and its multiplication is associative. Well, $$f(e_G)=e_H$$ and $$f(e_G)\in I$$ by definition, so we're good there. Furthermore, any element of $$I$$ can be written in the form $$f(g)$$ for some $$g\in G$$. Hence, $$f(g^{-1})=f(g)^{-1}$$ is in $$I$$ as well, so we have inverses. Associativity follows from the fact that $$I$$'s multiplication is $$H$$'s multiplication, and it's associative. Finally, $$f\mid_I$$ is surjective by definition and so an isomorphism when $$f$$ is injective. $$\square$$

One thing worth mentioning is that while the definition of a subgroup requires that it first be a subset, we usually ignore this part and call any group embeddable into $G$ a subgroup of $G$. Now that we have this notion of a subgroup, let’s see what we can do with it.

Theorem (2-step subgroup test)
Let $H$ be a non-empty subset of a group $G$. Then, $H$ is a subgroup of $G$ if it is closed under multiplication and contains inverses for each element.

Pf: Multiplication on $$H$$ inherits associativity from multiplication on $$G$$, and it has inverses by assumption. The group operation of $$H$$ is well-defined since it is closed, so the only thing to verify is that $$H$$ contains the identity. $$H$$ is non-empty so pick some $$h\in H$$. By assumption $$h^{-1}\in H$$ as well. Since $$H$$ is closed under multiplication, $$hh^{-1}\in H$$ so $$H$$ contains the identity. $$\square$$

Theorem (1-step subgroup test)
Let $H$ be a non-empty subset of $G$. Then, $H$ is a subgroup of $G$ if for all $a,b\in H$, we have $ab^{-1}\in H$ as well.

Pf: Exercise for the reader

Let $G$ be a group and fix some element $a\in G$. We say $\gen a:=\{a^n:n\in\Z\}$ is the cyclic group generated by $a$.

$\gen a$ is a subgroup of $G$.

Pf: Pick any two elements, $a^n,a^m\in\gen a$. Then, $(a^n)(a^m)^{-1}=(a^n)(a^{-m})=a^{n-m}\in\gen a$. Furthermore, $\gen a$ is visibly non-empty, so it's a subgroup by the 1-step test. $\square$

So as an example, in $\Z$, $\gen 2$ is the multiples of $2$. This brings up a good source of confusion. When considering $\Z$ as a group under addition, $a^n$ is not $a$ raised to the $n$th power, but instead $a$ added to itself $n$ times. Luckily, $\Z$ is abelian so this is normally written as $na$, but I just wanted to clarify.

One important notion in group theory, that unfortunately plays a minor role in this post, is that of the order of an element. For a group $G$, the order of the group, denoted $|G|$, is simply its size. For some element $a$, its order, denoted $|a|$, is the smallest positive exponent $n$ s.t. $a^n=e$. If no such $n$ exists, then $a$ is said to have infinite order. For a finite group, every element has some finite order (why?). Calling this the order of $a$ is justified by the following.

$|a|=|\gen a|$

Note that a (finite) cyclic group is one where the order of some element is the order of the group. Furthermore, since I didn’t make this an exercise before, show that any cyclic group is abelian.
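Orders are easy to compute by repeated multiplication. As a small illustration of my own in $\Z_{12}$ under addition (where “$a^n$” means $a$ added to itself $n$ times):

```python
def order(a, op, e):
    """Smallest n >= 1 with a combined with itself n times equal to e."""
    x, n = a, 1
    while x != e:
        x = op(x, a)
        n += 1
    return n

add12 = lambda a, b: (a + b) % 12
orders = {a: order(a, add12, 0) for a in range(12)}

assert orders[1] == 12    # 1 generates all of Z_12, so Z_12 is cyclic
assert orders[2] == 6 and orders[3] == 4 and orders[6] == 2
# every order divides 12 = |Z_12|, foreshadowing Lagrange's theorem later on
assert all(12 % n == 0 for n in orders.values())
```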

Let $f:G\rightarrow H$ be a group homomorphism. We define the kernel of $f$ as $\ker f:=\{g\in G:f(g)=e_H\}$, the set of elements mapped to the identity.

Let $f:G\rightarrow H$ be a group homomorphism. Then, $\ker f\le G$

Pf: Exercise for the reader

Since we now know some stuff about homomorphisms and subgroups, the majority of what follows will focus on proving two main theorems: Lagrange’s Theorem and the First Isomorphism Theorem. After that, I will mention something that will be useful for one of the things I wanna talk about in a future post.


Quotient Groups

The goal for this section is to find a way to generalize modular arithmetic to arbitrary groups. In modular arithmetic we can say things like $7\equiv4\pmod3$ when $3|(7-4)$, but if you generalize division to groups in the obvious way, you don’t get anything useful; you’d end up with any two elements being equivalent as long as you modded out by something non-zero (why?). Because of this, instead of building off of division, we will follow the idea that modding out by $3$ is a way of treating $3$ as being $0$; this choice will manifest itself in an important role played by kernels.

Let $H\le G$ be a subgroup. We say that $H$ is normal if it is the kernel of some homomorphism. We denote this $H\trianglelefteq G$.

Returning to our example, from the perspective of $3$ being $0$ (i.e. $3\Z$ being normal), this equivalence is really expressing that $7=4+3\equiv4+0=4$. We can take this a step further by writing $7=1+2*3$ and $4=1+1*3$, which makes it apparent that they are equivalent because they are both $1$ more than a multiple of $3$. In the context of general groups, if we are going to treat some subgroup as being $0$, then any elements that are a fixed amount more than members of the subgroup should similarly be considered equivalent.

Let $H\le G$ be a subgroup, and fix some element $a\in G$. A (left) coset of $H$ is a set of the form $aH:=\{ah:h\in H\}$

Prove or disprove. For any subgroup $H\le G$ and element $a\in G$, we have that $aH=Ha$ where $Ha:=\{ha:h\in H\}$ is a right coset.

Hopefully you did the exercise. It turns out to be false in general, but miraculously, it is true for normal subgroups.

Let $H\trianglelefteq G$ be a normal subgroup. Then, every left coset of $H$ is also a right coset (and vice versa).

Pf: Let $f:G\rightarrow K$ be a homomorphism with $\ker f=H$, and fix any $a\in G$. Then, an arbitrary element of $aH$ has the form $ah$ where $h\in H$. Note that $ah=aha^{-1}a$. To complete the proof, we see that $f(aha^{-1})=f(a)f(h)f(a)^{-1}=f(a)ef(a)^{-1}=e$ so $aha^{-1}\in\ker f=H$ and so $ah=(aha^{-1})a\in Ha$. This shows that $aH\subseteq Ha$. The other direction can be shown analogously, so $aH=Ha$ for all $a$, which is more than sufficient to prove the claim. $$\square$$

The above theorem goes to show that normal subgroups are indeed special, and as it turns out, the converse of the theorem above is true as well 9, so this gives another way of characterizing normal subgroups. Now, recall that we want to come up with some notion of “modding out by a subgroup”, and so we want a way of saying when two elements of the big group are equivalent. We defined cosets with the idea that all of their members should be equivalent, and so the following shouldn’t be surprising.

Let $H$ be a (not necessarily normal) subgroup of $G$. Then, the relation defined by $x\sim y$ iff $xH=yH$ is an equivalence relation.

Pf: We need to show that $\sim$ is reflexive, symmetric, and transitive. It's obviously reflexive and symmetric, so we'll focus on transitivity. Suppose $x\sim y$ and $y\sim z$. Then, $xH=yH=zH$ so we have $x\sim z$, and we're done. $$\square$$

The cosets of $H$ partition $G$

Now, we can prove our first major result of the post.

Theorem (Lagrange)
Let $H\le G$ be a subgroup of a finite group. Then, $|H|$ divides $|G|$

Pf: Pick two cosets $aH$ and $bH$ of $G$. Then, the map $ah\mapsto (ba^{-1})ah$ is injective (a result of every element having an inverse), and you get a similar injective map from $bH\rightarrow aH$. Thus, $|aH|=|bH|$, so all cosets have the same order (size). Since the cosets of $H$ partition $G$, assuming there are $k$ such cosets, we have $|G|=k|H|$. $$\square$$

The order of an element of a finite group divides the order of the group.

Every group of prime order is cyclic.
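The counting in Lagrange’s proof is concrete enough to watch happen. Taking $G=\Z_{12}$ and $H=\{0,4,8\}$ (a choice of mine for illustration), the cosets all have size $|H|$ and partition $G$:

```python
G = set(range(12))
H = {0, 4, 8}

# left cosets a + H (written additively since Z_12 is abelian)
cosets = {frozenset((a + h) % 12 for h in H) for a in G}

assert all(len(c) == len(H) for c in cosets)    # every coset has |H| elements
assert set().union(*cosets) == G                # the cosets cover G
assert sum(len(c) for c in cosets) == len(G)    # ...without overlapping
assert len(G) == len(cosets) * len(H)           # so |G| = k|H|, here 12 = 4*3
```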

Most everything after the definition of a coset wasn’t strictly needed for this, but is still good to know. We can finally say what it means to mod out by a subgroup.

The index of $H$ in $G$ is the number of (left) cosets of $H$ in $G$ and is denoted $[G:H]$ or $(G:H)$.

Let $H\trianglelefteq G$ be a normal subgroup. We define the quotient group $G/H$ to be the set of left cosets of $H$ together with the multiplication operation $(aH)(bH)=(ab)H$.

When we mod out by a subgroup, we treat elements of the same coset as being equivalent, so instead of operating on individual elements, we operate on cosets instead. In practice, we can usually find a nice group that a quotient is isomorphic to, and so work with it instead of the quotient directly. This way, we can have quotients but still deal with elements instead of cosets. As an example, $\Z/n\Z\simeq\Z_n$.

Now that we have this defined, we need to show that this is a good definition.

Multiplication of the quotient group is well-defined. That is, if $xH=x'H$ and $yH=y'H$, then $(xy)H=(x'y')H$.

Pf: Suppose $$H\trianglelefteq G$$ and pick $$x,x',y,y'\in G$$ s.t. $$xH=x'H$$ and $$yH=y'H$$. Then, $$(xy)H=x(yH)=x(Hy)=(xH)y=(x'H)y=x'(Hy)=x'(Hy')=x'(y'H)=x'y'H$$. $\square$

The proof was short, but notice that we could only switch between left and right cosets so freely because we assumed that $H$ was normal. If $H$ is not normal, then this theorem is false. Now that we have well-definedness, the real crucial thing to show is up to you.

Let $H\trianglelefteq G$. Then, $G/H$ is a group, and the quotient map $f:G\rightarrow G/H$ defined by $f(a)=aH$ is a surjective homomorphism with $\ker f=H$.

Pf: (important) exercise for the reader.

Prove that a subgroup $H\le G$ is normal iff $aHa^{-1}\subseteq H$ for all $a\in G$. After this, prove that this is the case iff $aHa^{-1}=H$ for all $a\in G$.

Prove that for an abelian group, every subgroup is normal.

Let $H\trianglelefteq G$ be a normal subgroup of finite index, and fix any $a\in G$. Then, $a^{[G:H]}\in H$.

Pf: Let $$f:G\rightarrow G/H$$ be the quotient map, and note that $$|G/H|=[G:H]$$. Thus, $$f(a)=aH$$ has order dividing $$[G:H]$$ by Lagrange, so $$f(a^{[G:H]})=(aH)^{[G:H]}=H$$ so $$a^{[G:H]}\in\ker f=H$$. $$\square$$
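Here is the quotient construction played out on a small example of my own: $G=\Z_{12}$ with $H=\{0,4,8\}$, which is normal since $\Z_{12}$ is abelian. The code checks both well-definedness of the coset operation and the theorem above (written additively, $[G:H]\cdot a\in H$):

```python
H = {0, 4, 8}

def coset(a):
    """The coset a + H inside Z_12."""
    return frozenset((a + h) % 12 for h in H)

cosets = {coset(a) for a in range(12)}
index = len(cosets)
assert index == 4                               # [G : H] = 12/3 = 4

# well-definedness: the sum coset is independent of chosen representatives
for a in range(12):
    for a2 in coset(a):
        for b in range(12):
            for b2 in coset(b):
                assert coset(a + b) == coset(a2 + b2)

# the theorem above: a^[G:H] (additively, [G:H]*a) always lands in H
assert all((index * a) % 12 in H for a in range(12))
```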

Diagrams et al.

In this section, I’m gonna include some pictures, but they’ll be different from the type of images I usually include. Here, I’ll use so-called commutative diagrams. A diagram is a collection of objects (i.e. groups) with directed arrows (i.e. homomorphisms) drawn between them that makes it easier to discuss things when you have several functions going between different groups. We say such a diagram commutes when any path along arrows from one group to another gives the same result 10. This will make more sense when we see some.

I mentioned before that we would prove the first isomorphism theorem. Instead of proving it directly, we will derive it as a corollary of what is, in my opinion, a better theorem.

Factor Theorem
Let $f:G\rightarrow H$ be a homomorphism, and let $K\trianglelefteq G$ with $K\subseteq\ker f$. Then, there exists a unique homomorphism $h:G/K\rightarrow H$ such that $f$ factors through $G/K$ in the sense that the following diagram commutes

That is, $f$ is the composition of $h$ and the quotient map $G\rightarrow G/K$. Furthermore, $h$ is injective iff $K=\ker f$, and $h$ is surjective iff $f$ is surjective.

Pf: Let $$f,G,H,K$$ be as in the statement of the theorem. We first want to show that there is a unique $$h:G/K\rightarrow H$$ such that $$h(xK)=f(x)$$ for all $$x\in G$$. Well, define $$h$$ based solely off of this equation. Every element of $$G/K$$ is of the form $$xK$$ and $$f(x)$$ is unique given a choice of $$x$$, so this gives a unique satisfying $$h$$. We now need to make sure that $$h$$ is well-defined (the fact that it's a homomorphism follows from $$f$$ being one), so pick $$g,g'\in G$$ s.t. $$gK=g'K$$. Then, $$g^{-1}g'\in K$$ so $$f(g')=f(gg^{-1}g')=f(g)f(g^{-1}g')=f(g)\implies h(gK)=h(g'K)$$ so we're good. For the statements about injectivity and surjectivity, convince yourself that a homomorphism is injective iff its kernel is trivial, and that $$\image(h)=\image(f)$$. $$\square$$

Corollary (First isomorphism theorem)
Let $f:G\rightarrow H$ be a surjective homomorphism. Then, $G/\ker f\simeq H$.

Pf: By the above theorem, $$f$$ must factor through some map $$g:G/\ker f\rightarrow H$$. Furthermore, this map must be injective and surjective since we're factoring through the full kernel and $$f$$ was surjective. Thus, we have an isomorphism. $$\square$$
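Concretely, the first isomorphism theorem can be watched in action with a small example of my own: $f:\Z_{12}\rightarrow\Z_4$ given by $f(n)=n\bmod 4$ is a surjective homomorphism, and the cosets of its kernel line up exactly with the elements of $\Z_4$:

```python
f = lambda n: n % 4   # a surjective homomorphism from Z_12 onto Z_4

kernel = {n for n in range(12) if f(n) == 0}
assert kernel == {0, 4, 8}

# two elements share an image iff they lie in the same coset of the kernel,
# which is exactly why h(n + ker f) = f(n) is a well-defined bijection on cosets
for a in range(12):
    for b in range(12):
        same_coset = (a - b) % 12 in kernel
        assert (f(a) == f(b)) == same_coset

assert {f(n) for n in range(12)} == {0, 1, 2, 3}   # surjective, so G/ker f ≅ Z_4
```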

Lagrange’s Theorem and the first isomorphism theorem are two of the big, foundational theorems for group theory, and we’ve now proven both of them. Normally, this would be a good place to stop, but there’s one last thing I want to quickly introduce 11.

Consider a sequence of groups and homomorphisms between them

$$\begin{CD} G_1 @>f_1>> G_2 @>f_2>> G_3 @>f_3>> G_4 @>f_4>> \dots @>f_{n-1}>> G_n \end{CD}$$
We say such a sequence is exact if $\image(f_i)=\ker(f_{i+1})$ for all $i$. In particular, a short exact sequence is an exact sequence of the form
$$\begin{CD} \{e\} @>>> N @>f>> G @>g>> H @>>> \{e\} \end{CD}$$
where $\{e\}$ is the trivial group.

The first time I saw exact sequences, all I could think was, “Why? Who cares?” At first glance, they seem pretty artificial, but they actually give a compact way of codifying some information about how groups are related to each other. Let’s look at the short exact sequence appearing in the above definition for example. The fact that the sequence is exact at $N$ says that the image of the incoming map (which must send the trivial element to the identity in $N$) is the kernel of $f$. This is just the statement that $\ker f=\{e\}$, or equivalently that $f$ is injective! Similarly, exactness at $H$ says that the image of $g$ is the kernel of the map sending all of $H$ to the identity, so $\image(g)=H$ and $g$ is surjective! Finally, exactness at $G$ says that $\image(f)=\ker g$. Since we know $f$ is injective, this means we can embed $N$ in $G$ as a normal subgroup. Furthermore, since $g$ is surjective, the first isomorphism theorem tells us that $G/N\simeq H$, so we get the sense that $G$ is somehow made up from $N$ and $H$ (the simplest example is $G=N\times H$, and you can easily pick $f,g$ to form a short exact sequence in this case, but other choices of $G$ may work too). Because of this observation, we make our final definition.
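As a tiny concrete check (with maps of my own choosing), $\{e\}\rightarrow\Z_2\rightarrow\Z_4\rightarrow\Z_2\rightarrow\{e\}$ is a short exact sequence with $f(x)=2x$ and $g(y)=y\bmod 2$, even though $\Z_4\not\simeq\Z_2\times\Z_2$; being "made up from" $N$ and $H$ need not mean being their direct product:

```python
f = lambda x: (2 * x) % 4    # Z_2 -> Z_4
g = lambda y: y % 2          # Z_4 -> Z_2

Z2, Z4 = range(2), range(4)

assert len({f(x) for x in Z2}) == 2                        # f is injective
assert {g(y) for y in Z4} == set(Z2)                       # g is surjective
assert {f(x) for x in Z2} == {y for y in Z4 if g(y) == 0}  # im f = ker g

# Z_4 is an extension of Z_2 by Z_2 but has an element of order 4,
# so it cannot be Z_2 x Z_2, where every element added to itself gives 0
assert (1 + 1) % 4 != 0
```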

We say $G$ is an extension of $N$ by $H$ 12 if there exists a short exact sequence

$$\begin{CD} \{e\} @>>> N @>f>> G @>g>> H @>>> \{e\} \end{CD}$$

  1. ambitious because historically speaking I’m bad at sticking with these kinds of things 

  2. I imagine one post of group theory, one on rings/fields, and then one on noetherian rings and dedekind domains 

  3. whatever that means 

  4. you won’t see this much here, but it’s important to keep in mind that groups often do perform some action on an object, and studying these group actions can lead to good math. 

  5. not in this algebra sequence, but at some point I hope to give reason to why this isn’t the most appropriate name, and these things should really be called Z-modules instead 

  6. not in this algebra sequence, but at some point I hope to give reason to why actually groups in general are really ugly and do some pathological things 

  7. Socks and Sandals if you prefer 

  8. This is not quite the definition of finitely generated, but works in almost all (actually, maybe all. I don’t know of a counterexample) cases 

  9. This might be more apparent after we define quotient groups 

  10. Basically every diagram you see in the wild will be commutative 

  11. Before I forget to mention this, exercise: look up the like 3 other isomorphism theorems. Also, there’s a decent amount of group theory you’d usually learn leading up to some of the stuff I’ve mentioned here that I didn’t bring up at all. 

  12. Some people say G is an extension of H by N instead. Doesn’t really matter 
