<h1 id="abs-val-p-adic">Absolute Values and p-adics</h1>
<p><i>2018-09-19, https://nivent.github.io/blog/abs-val-p-adic, Niven Achenjang’s Personal Website</i></p>
<p>A potentially better title for this post might be “A Brief Introduction to Local Fields.” In it, I plan to introduce absolute values, prove some surprising facts about non-archimedean absolute values, and end with a characterization of so-called local fields in characteristic 0. I’ll try to focus more on topological/analytic aspects <sup id="fnref:5"><a href="#fn:5" class="footnote">1</a></sup>, so this post will be less algebraic than most of my others.</p>
<h1 id="absolute-values">Absolute Values</h1>
<div class="definition">
Let $D$ be an integral domain. An <b>absolute value</b> $\nabs:D\to\R_{\ge0}$ is a function satisfying
<ol>
<li> $\abs x=0\iff x=0$ </li>
<li> $\abs{xy}=\abs x\abs y$ </li>
<li> $\abs{x+y}\le\abs x+\abs y$ </li>
</ol>
3. above is called the <b>triangle inequality</b>.
</div>
<div class="remark">
What I called an absolute value is sometimes instead called a "norm," and an "absolute value" is usually then a function satisfying 1. and 2., but 3. is replaced by the condition that $\abs{x+y}\le c\max(\abs x,\abs y)$ for some $c\ge1$.
</div>
<p>Absolute values give us a way of measuring the size of elements of our ring. While I defined the notion for general integral domains, we usually only consider absolute values on fields. In fact, for $D$ a domain with fraction field $F$, any absolute value $\nabs:D\to\R_{\ge0}$ extends to one on $F$ via</p>
<script type="math/tex; mode=display">\abs{\frac xy}=\frac{\abs x}{\abs y}</script>
<p>This is easily checked to satisfy 1., 2., and 3. In addition, there are other immediate properties of absolute values.</p>
<div class="proposition">
Let $\nabs:D\to\R_{\ge0}$ be an absolute value. Then,
<ul>
<li> $\abs{\pm1}=1$. More generally, $\abs\zeta=1$ whenever $\zeta\in D$ is a root of unity. </li>
<li> $\abs n\le n$ for all $n\in\Z_{\ge0}$ </li>
</ul>
</div>
<p>So, what should we keep in mind as our go-to examples of absolute values? Well, one obvious choice is the normal absolute value on $\Q$ where $|x|=x$ if $x\ge0$ and $|x|=-x$ otherwise. We will denote this absolute value by $\nabs_\infty:\Q\to\R_{\ge0}$. In addition to this, there is always a trivial absolute value where $|x|=1$ for all nonzero $x\in D$. Both of these examples are kinda boring <sup id="fnref:9"><a href="#fn:9" class="footnote">2</a></sup>, so another good one is</p>
<div class="example">
Fix a prime $p$. The <b>$p$-adic absolute value</b> $\nabs_p:\Q\to\R$ is given by
$$\abs{\frac xy}_p=\frac1{p^{v_p(x)-v_p(y)}}$$
when $x,y\neq0$ (where $v_p(x)$ is the exponent of the largest power of $p$ dividing $x$), and $\abs0_p=0$.
</div>
<p>For example, when $p=3$, we have $\abs{75/19}_ 3=1/3$ since $3\nmid19$ and $3\mid75$ but $9\nmid75$. This may seem like a weird way to measure size at first, but if you stop and think about it, it is saying, for example, that $n\in\Z$ is really small whenever it’s divisible by a large power of $p$; in other words, if we can write $n\equiv0\pmod{p^e}$ for $e$ large, then $n$ is small. In essence, this absolute value gives us an analytic way to study congruence relations. Speaking of analysis, absolute values turn rings into metric spaces.</p>
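<p>To make this concrete, here is a small Python sketch of $v_p$ and $\nabs_p$ on $\Q$ (the helper names <code>v_p</code> and <code>abs_p</code> are my own, not anything standard):</p>

```python
from fractions import Fraction

def v_p(n, p):
    """Exponent of the largest power of p dividing the nonzero integer n."""
    assert n != 0
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def abs_p(q, p):
    """p-adic absolute value on Q: |x/y|_p = p^(v_p(y) - v_p(x)), and |0|_p = 0."""
    q = Fraction(q)
    if q == 0:
        return Fraction(0)
    return Fraction(p) ** (v_p(q.denominator, p) - v_p(q.numerator, p))

print(abs_p(Fraction(75, 19), 3))  # 1/3: 3 | 75 but 9 does not, and 3 does not divide 19
print(abs_p(3**10, 3))             # 1/59049: divisible by a huge power of 3, hence tiny
```

<p>Exact <code>Fraction</code> arithmetic avoids floating point entirely, which matters since $p$-adic sizes shrink exponentially.</p>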
<div class="definition">
Fix a set $X$. A function $d:X\by X\to\R_{\ge0}$ is called a <b>metric</b> if it satisfies
<ol>
<li> $d(x,y)=0\iff x=y$ </li>
<li> $d(x,y)=d(y,x)$ </li>
<li> $d(x,y)+d(y,z)\ge d(x,z)$ </li>
</ol>
When $d$ is a metric, the pair $(X,d)$ is called a <b>metric space</b>.
</div>
<div class="theorem">
Let $\nabs:D\to\R_{\ge0}$ be an absolute value. Then, we can define a metric $d:D\by D\to\R_{\ge0}$ on $D$ via
$$d(x,y)=\abs{x-y}$$
</div>
<div class="proof4">
First,
$$d(x,y)=0\iff\abs{x-y}=0\iff x-y=0\iff x=y$$
For 2.,
$$d(x,y)=|x-y|=|-1||x-y|=|y-x|=d(y,x)$$
Finally,
$$d(x,y)+d(y,z)=|x-y|+|y-z|\ge|(x-y)+(y-z)|=|x-z|=d(x,z)$$
so we win.
</div>
<div class="remark">
Just a reminder: in the $p$-adic metric on $\Z$, two integers $x,y$ are close (i.e. $d(x,y)$ is small) whenever $x\equiv y\pmod{p^e}$ for $e$ large.
</div>
<p>Having a metric is great for many reasons; one of them is that a metric induces the metric topology <sup id="fnref:1"><a href="#fn:1" class="footnote">3</a></sup>. Given a metric space $(X,d)$, the metric topology has a basis consisting of open balls $B(x,r)=\brackets{y\in X\mid d(x,y)< r}$. Our primary application of this induced topology is the following definition.</p>
<div class="exercise">
Let $(X,d)$ be a metric space. Show that the (closed) ball $\bar B(x,r)=\brackets{y\in X:d(x,y)\le r}$ is a closed subset of $X$ (i.e. its complement is a union of open balls)
</div>
<p><span class="exercise">
Let $\nabs:D\to\R_{\ge0}$ be an absolute value on a domain $D$. Give $D$ the topology induced by $\nabs$. Prove that $\nabs$ is a continuous function to $\R_{\ge0}$ with the Euclidean topology. <sup id="fnref:2"><a href="#fn:2" class="footnote">4</a></sup>
</span></p>
<div class="definition">
Let $\nabs,\nabs':D\to\R_{\ge0}$ be two absolute values on a domain $D$. We say that they are <b>equivalent</b>, denoted $\nabs\sim\nabs'$, if the topologies they induce on $D$ coincide; i.e. the two topologies are not merely homeomorphic, but consist of literally the same open sets.
</div>
<div class="theorem">
Let $\nabs,\nabs':F\to\R_{\ge0}$ be two absolute values on a field $F$. Then, they are equivalent if and only if there exists some $t>0$ s.t. $\nabs^t=\nabs'$.
</div>
<div class="proof4">
$(\impliedby)$ It suffices to show that every open ball of one is an open ball of the other and vice versa. This is the case since
$$B(x,r)=\brackets{y\in F:|x-y|< r}=\brackets{y\in F:|x-y|'< r^t}=B'(x,r^t)$$
$(\implies)$ Note that while every open subset of $F$ w.r.t $\nabs$ is open w.r.t $\nabs'$, a priori not every open ball w.r.t $\nabs$ is an open ball w.r.t $\nabs'$. We will first show that $\abs x<1\iff\abs x'<1$. Fix $x\in F$ s.t. $\abs x<1$. Then, $\lim|x^n|=0$, so every open set $U\subset F$ containing $0$ contains a tail of this sequence (i.e. $\exists N\ge1$ such that $n\ge N\implies x^n\in U$). However, $\nabs,\nabs'$ share the same open sets, so we get that $\lim x^n=0\in F$ in the $\nabs'$ topology as well. By continuity of $\nabs'$, this means that $\lim\abs{x^n}'=0$, so $\abs{x}'<1$. By considering inverses, we immediately get $\abs x>1\iff\abs x'>1$ as well. Similarly, we have $\abs x=1\iff\abs x'=1$. To finish, for any $x\in F$ with $\abs x>1$, let
$$t(x)=\frac{\log{\abs x'}}{\log{\abs x}}$$
By construction, $\abs x^{t(x)}=\abs x'$. We claim that $t(x)=t(y)$ for all $x,y\in F$ with $\abs x,\abs y>1$. Suppose otherwise, and fix such $x,y$ with $t(x)\neq t(y)$; swapping $x$ and $y$ if necessary, we may assume $\frac{\log\abs x'}{\log\abs y'}<\frac{\log\abs x}{\log\abs y}$. Then, we can find $m\in\Z$ and $n\in\Z_{>0}$ such that
$$\frac{\log{\abs x'}}{\log{\abs y'}}<\frac mn<\frac{\log{\abs x}}{\log{\abs y}}$$
However, this implies that
$$\abs{\frac{x^n}{y^m}}'<1\text{ and }\abs{\frac{x^n}{y^m}}>1$$
which is impossible. Thus, $t=t(x)$ is independent of $x$. For $0<|x|<1$, we have
$$\frac1{\abs x^t}=\abs{\frac1x}^t=\abs{\frac1x}'=\frac1{\abs x'}\implies\abs x^t=\abs x'$$
so the claim holds.
</div>
<div class="definition">
A <b>place</b> on a field $F$ is an equivalence class of absolute values on $F$.
</div>
<p>The point of the above definition is that we shouldn’t focus on individual absolute values, but instead on places. With that said, what types of properties (of absolute values) are well-defined on places? Perhaps unsurprisingly, one of the most important such properties is Archimedeaness.</p>
<div class="definition">
Let $\nabs:D\to\R_{\ge0}$ be an absolute value (i.e. a representative of a place). Then, we say $\nabs$ is <b>archimedean</b> if $\abs{\Z}\subset\R$ is unbounded (where $\Z$ denotes the image of $\Z$ in $D$), and we call it <b>non-archimedean</b> otherwise.
</div>
<div class="theorem">
An absolute value $\nabs:D\to\R_{\ge0}$ is non-archimedean if and only if it satisfies the <b>ultrametric inequality</b>: $\abs{x+y}\le\max(\abs x,\abs y)$.
</div>
<div class="proof4">
$(\impliedby)$ this direction is obvious.<br />
$(\implies)$ Let $\nabs$ be a non-archimedean absolute value on a domain $D$. Fix $C\in\R$ large enough so that $\abs m< C$ for all $m\in\Z$. Then, for any $n\ge1$, we have
$$\begin{align*}
\abs{x+y}^n &=\abs{\sum_{k=0}^n\binom nkx^ky^{n-k}}\\
&\le\sum_{k=0}^nC\max(\abs x,\abs y)^n\\
&= (n+1)C\max(\abs x,\abs y)^n\\
\abs{x+y} &\le ((n+1)C)^{1/n}\max(\abs x,\abs y)
\end{align*}$$
The bottom inequality holds for all $n$, so taking the limit as $n\to\infty$ gives the desired result.
</div>
<div class="corollary">
For a non-archimedean absolute value, $\abs n\le1$ for all $n\in\Z$.
</div>
<div class="exercise">
Prove that two equivalent absolute values are either both archimedean or both not.
</div>
<div class="example">
The Euclidean absolute value $\nabs_\infty:\Q\to\R$ is archimedean, while the $p$-adic absolute values $\nabs_p:\Q\to\R$ are not.
</div>
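<p>Both halves of this example can be sanity-checked numerically. The sketch below (redefining the hypothetical <code>abs_p</code> helper from before) verifies the ultrametric inequality for $\nabs_5$ on random rationals, and that $\abs n_5\le1$ on the integers while $\abs n_\infty$ is unbounded:</p>

```python
import random
from fractions import Fraction

def v_p(n, p):
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def abs_p(q, p):
    """p-adic absolute value on Q (0 maps to 0)."""
    q = Fraction(q)
    if q == 0:
        return Fraction(0)
    return Fraction(p) ** (v_p(q.denominator, p) - v_p(q.numerator, p))

random.seed(0)
for _ in range(10_000):
    x = Fraction(random.randint(-10**6, 10**6), random.randint(1, 10**6))
    y = Fraction(random.randint(-10**6, 10**6), random.randint(1, 10**6))
    # the ultrametric inequality for the non-archimedean |.|_5
    assert abs_p(x + y, 5) <= max(abs_p(x, 5), abs_p(y, 5))

# |Z|_5 is bounded by 1 ...
assert all(abs_p(n, 5) <= 1 for n in range(1, 1000))
# ... while the Euclidean |.|_inf is unbounded on Z
assert abs(10**100) > 10**99
```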
<h1 id="non-archimedean-geometry">Non-Archimedean Geometry</h1>
<p>In this section, we’ll explore some properties of non-archimedean places. We will see that, in some ways, non-archimedean places are much nicer than their archimedean counterparts. Throughout the section, fix a field $F$ with a non-archimedean place represented by $\nabs:F\to\R_{\ge0}$.</p>
<div class="theorem">
Every triangle in $F$ is isosceles.
</div>
<div class="proof4">
Consider a triangle with points at $0,x,y\in F$. It has side lengths $\abs x,\abs y,\abs{x-y}$. Our goal is to show that two of these are equal. If $\abs x=\abs y$, we are done, so suppose (WLOG) that $\abs x>\abs y$. We claim that, in this case, $\abs x=\abs{x-y}$. This is because
$$\abs x=\abs{(x-y)+y}\le\max\parens{\abs{x-y},\abs y}\implies\abs x\le\abs{x-y}$$
where the implication comes from the assumption that $\abs x>\abs y$. Furthermore,
$$\abs{x-y}\le\max\parens{\abs x,\abs y}=\abs x$$
so we have $\abs x=\abs{x-y}$ as claimed.
</div>
<p>If you look at the proof, we actually proved something slightly stronger. Not only is every triangle isosceles, but it’s always the longest side length that appears (at least) twice.</p>
<div class="theorem">
Every point of a circle in $F$ is the center.
</div>
<div class="proof4">
Consider a “circle” (really, an open ball) $B=B(x,r)\subset F$, and fix any $y\in B$. Our goal is to show that $B(x,r)=B(y,r)$. For any $z\in F$ with $\abs{x-z}< r$, we have
$$\abs{y-z}=\abs{(y-x)+(x-z)}\le\max(\abs{y-x},\abs{x-z})< r$$
so $z\in B(y,r)$. Since $x\in B(y,r)$, by symmetry this shows that $B(x,r)=B(y,r)$.
</div>
<div class="corollary">
The (closed) ball $\bar B(x,r)=\brackets{y\in F:\abs{x-y}\le r}$ is open.
</div>
<div class="proof4">
For any $y\in\bar B(x,r)$, we have $B(y,r)=B(x,r)\subset\bar B(x,r)$.
</div>
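<p>This is easy to see on a finite window of integers with the $3$-adic metric. The sketch below (with a small hypothetical <code>abs_3</code> helper) checks that the open ball $B(0,1/3)$, which is the set of multiples of $9$ in our window, equals $B(y,1/3)$ for every $y$ in it:</p>

```python
from fractions import Fraction

def v_p(n, p):
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def abs_3(n):
    """3-adic absolute value on Z (0 maps to 0)."""
    return Fraction(0) if n == 0 else Fraction(3) ** (-v_p(n, 3))

window = range(-200, 200)

def ball(center, radius):
    """Open ball around `center`, restricted to a finite window of Z."""
    return {z for z in window if abs_3(z - center) < radius}

B = ball(0, Fraction(1, 3))  # |z|_3 < 1/3 means 9 divides z
assert B == {z for z in window if z % 9 == 0}
assert all(ball(y, Fraction(1, 3)) == B for y in B)  # every point is a center
```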
<p>However, not everything is better in non-archimedean land. Non-archimedean fields have weird topological properties.</p>
<div class="theorem">
The topology on $F$ is totally disconnected. i.e. the only connected nonempty sets are points.
</div>
<div class="proof4">
Let $C\subset F$ be a set with two distinct points $x,y\in C$. Fix some $0 < r < \abs{x-y}$ and write
$$C=\parens{C\cap\brackets{z\in F:\abs{x-z}\le r}}\sqcup\parens{C\cap\brackets{z\in F:\abs{x-z}>r}}$$
as the union of two nonempty open (in $C$) sets. This means that $C$ is not connected.
</div>
<p>For the last theorem, I’ll need to introduce the notion of a complete metric space. This is a metric space “without any holes.” The idea is that any sequence that looks like it should have a limit <sup id="fnref:3"><a href="#fn:3" class="footnote">5</a></sup> actually does have a limit.</p>
<div class="definition">
Let $(X,d)$ be a metric space. A sequence $\brackets{a_n}$ in $X$ is called <b>Cauchy</b> if for any $\eps>0$, there exists some $N\in\N$ such that
$$d(a_n,a_m)<\eps$$
whenever $n,m\ge N$.
</div>
<div class="exercise">
Show that any convergent sequence is Cauchy.
</div>
<div class="example">
Consider the sequence $(3,3.1,3.14,3.141,3.1415,\dots)$ consisting of truncations of the decimal expansion of $\pi$. As a sequence in $\Q$ (with the metric from $\nabs_\infty$), it is Cauchy but not convergent.
</div>
<div class="definition">
A metric space $(X,d)$ is called <b>complete</b> if every Cauchy sequence converges.
</div>
<p>Cauchy sequences look like they should converge; in the tail of the sequence, there’s barely any difference between one term and the next, so you would expect it to eventually converge to some point. As the example above shows, this isn’t true in general, so complete spaces are extra nice. In particular, a field complete with respect to a non-archimedean absolute value has a very simple criterion for convergence of series.</p>
<div class="lemma">
$\abs{x+y}\ge\abs{x}-\abs{y}$
</div>
<div class="proof4">
$\abs x=\abs{(x+y)-y}\le\abs{x+y}+\abs y$, so $\abs{x+y}\ge\abs x-\abs y$.
</div>
<div class="theorem">
Assume that $F$ is complete as a metric space with the metric induced by $\nabs$. Let $\brackets{a_n}_{n\ge1}\subset F$ be a sequence in $F$. Then,
$$\sum_{n\ge1}a_n\text{ converges}\iff\lim\abs{a_n}=0$$
</div>
<div class="proof4">
$(\implies)$ Assume that $\sum_{n\ge1}a_n=L$. Then, for any $\eps>0$, there's some $N_{\eps}$ such that $\abs{\sum_{n\ge k}a_n}=\abs{L-\sum_{n=1}^{k-1}a_n}<\eps$ whenever $k\ge N_{\eps}$. Now, suppose that $\lim\abs{a_n}\neq0$, so there exists some $\delta>0$ such that $\abs{a_n}>\delta$ infinitely often. Now, fix some $k\ge N_{\frac\delta2}$ such that $\abs{a_k}>\delta$. Then, $\abs{\sum_{n\ge k}a_n},\abs{\sum_{n>k}a_n}<\frac\delta2$, but
$$\abs{\sum_{n\ge k}a_n}\ge\abs{a_k}-\abs{\sum_{n>k}a_n}>\delta-\frac\delta2=\frac\delta2$$
where the first inequality above comes from the lemma. This is a contradiction, so we win.<br />
$(\impliedby)$ Now, assume instead that $\lim\abs{a_n}=0$, and let $S_n=\sum_{k=1}^na_k$ denote the $n$th partial sum. We will show that the sequence $S_n$ is Cauchy. This is because, assuming WLOG that $n\ge m$,
$$\abs{S_n-S_m}=\abs{\sum_{k=m+1}^na_k}\le\max\parens{\abs{a_{m+1}},\dots,\abs{a_n}}\longrightarrow0$$
Hence, in the tail of the sequence (of partial sums), the terms become arbitrarily close together, so the sequence is Cauchy. This gives the claim.
</div>
<div class="remark">
I'm not that strong in analysis, so there's probably a better proof of the $\implies$ direction than what I was able to come up with.
</div>
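<p>As a concrete illustration (again via the hypothetical <code>abs_p</code> helper, redefined here): in $\Q_2$, the terms of $1+2+4+8+\dots$ tend to $0$ $2$-adically, so the series converges, and indeed its partial sums $2^k-1$ converge to $\frac1{1-2}=-1$:</p>

```python
from fractions import Fraction

def v_p(n, p):
    e = 0
    while n % p == 0:
        n //= p
        e += 1
    return e

def abs_p(q, p):
    q = Fraction(q)
    if q == 0:
        return Fraction(0)
    return Fraction(p) ** (v_p(q.denominator, p) - v_p(q.numerator, p))

# partial sums of 1 + 2 + 4 + ... march toward -1 in the 2-adic metric
for k in (1, 5, 10, 20):
    S = sum(2**n for n in range(k))  # partial sum, equal to 2^k - 1
    print(k, abs_p(S - (-1), 2))     # distance to -1 is 2^(-k), shrinking to 0
```

<p>The same series diverges wildly under the archimedean $\nabs_\infty$, where the terms do not tend to $0$.</p>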
<p>Compared to the mess of convergence tests for series in $\R$, this is quite nice. However, the question still remains: where do these complete fields come from?</p>
<h1 id="constructing-complete-fields">Constructing Complete Fields</h1>
<p>In this section, we will give the idea behind the construction of complete fields such as $\Q_p$, the $p$-adic numbers. After that, we will show one way of writing down elements of $\Q_p$.</p>
<div class="theorem">
Let $\nabs:F\to\R_{\ge0}$ be an absolute value on a field $F$. Then there exists an isometry $\iota:F\to\wh F$ into a field $\wh F$ complete with respect to an absolute value $\wh\nabs:\wh F\to\R_{\ge0}$ such that any isometry $\iota':F\to K$ into a complete field factors uniquely through $\iota$. Furthermore, $\iota(F)\subset\wh F$ is dense, and $\wh F$ is unique up to (unique) isomorphism.
</div>
<p>The idea is to construct $\wh F$ as the set of Cauchy sequences $\brackets{a_n}$ in $F$ modulo the relation $\brackets{a_n}\sim\brackets{b_n}\iff\lim\abs{a_n-b_n}=0$. You then define multiplication, addition, subtraction, and division in the obvious ways <sup id="fnref:4"><a href="#fn:4" class="footnote">6</a></sup>, and endow it with the absolute value</p>
<script type="math/tex; mode=display">\abs{\brackets{a_n}}'=\lim\abs{a_n}</script>
<div class="definition">
The field $\wh F$ in the above theorem is called the <b>completion</b> of $F$ w.r.t $\nabs$.
</div>
<div class="remark">
You can use this theorem to construct $\R$ from $\Q$ via $\nabs_\infty$, but you cannot use this theorem to define $\R$. That is because this theorem (and even the notion of an absolute value) relies on preexisting construction of $\R$.
</div>
<p>We can, however, use this theorem to define the <b>$p$-adic numbers</b> $\Q_p$ which are the completion of $\Q$ with respect to the $p$-adic absolute value $\nabs_p$. We still use $\nabs_p$ to denote its extension to an absolute value on $\Q_p$.</p>
<p>The rest of this section will be devoted to constructing a canonical choice of representatives of elements of $\Q_p$. We first make the observation that passing from $\Q$ to $\Q_p$ does not change the image of $\nabs_p$. Formally,</p>
<div class="theorem">
Let $\brackets{a_n}\subset\Q$ be Cauchy. Then, the sequence $\abs{a_n}_p$ is eventually constant or $\lim a_n=0$.
</div>
<div class="proof4">
Suppose that $\lim a_n\neq0$, so there exists some $\eps>0$ such that $\abs{a_n}_p>\eps$ infinitely often. Furthermore, there is some $N\in\N$ such that
$$\abs{a_n-a_m}_p<\eps$$
whenever $n,m\ge N$. Fix an index $n\ge N$ such that $\abs{a_n}_p>\eps$. Then, for any $m\ge n$, because all triangles are isosceles (with largest side length appearing twice) and $\abs{a_n-a_m}_p<\eps<\abs{a_n}_p$, we get that $\abs{a_n}_p=\abs{a_m}_p$, showing that $\abs{a_n}_p$ is eventually constant.
</div>
<div class="corollary">
$\abs{\Q_p}_p=\abs{\Q}_p$. That is, for every nonzero $x\in\Q_p$, $\abs x_p=p^n$ for some $n\in\Z$.
</div>
<div class="remark">
The above theorem (and corollary) hold anytime you complete a field with respect to a non-archimedean absolute value: passing to the completion does not change the set of values the absolute value attains. This is in contrast to, for example, the archimedean place $\nabs_\infty$ on $\Q$, since $\abs\R_\infty=\R_{\ge0}$ is strictly bigger than $\abs\Q_\infty=\Q_{\ge0}$.
</div>
<p>This fact will allow us to simplify things slightly by shifting focus from $\Q_p$ to $\Z_p$ without losing too much information.</p>
<div class="definition">
For a prime $p$, the <b>$p$-adic integers</b> form the ring
$$\Z_p:=\brackets{q\in\Q_p:\abs q_p\le 1}$$
</div>
<div class="remark">
$\Z_p\sqbracks{\frac1p}=\Q_p$. Given some nonzero $q\in\Q_p$, we can write $\abs{q}_p=p^n$ for some $n\in\Z$. If $n\le0$, then $q\in\Z_p$ already. Otherwise, $\abs{p^nq}_p=1$, so $a:=p^nq\in\Z_p$, and we can write
$$q=\frac a{p^n}\in\Z_p\sqbracks{\frac1p}$$
</div>
<p>The above remark is what I meant by not losing too much information; you only have to invert a single element to get from $\Z_p$ to $\Q_p$. Now, our goal is to show that</p>
<script type="math/tex; mode=display">% <![CDATA[
\Z_p=\brackets{\sum_{k\ge0}a_kp^k:0\le a_k< p} %]]></script>
<p>Since $\abs{a_kp^k}_ p\le\abs{p^k}_ p\to0$, we know that any such series converges, so we need to show that every element of $\Z_p$ admits such a series representation. I’ll leave it as an exercise to show that these representations are unique. We’ll call a sum of the above form a <b>power series representation</b> of $x\in\Z_p$.</p>
<p>To do this <sup id="fnref:6"><a href="#fn:6" class="footnote">7</a></sup>, we’ll need to do some algebra. First, note that $p\Z_p=\brackets{n\in\Z_p:\abs n_p<1}=\Z_p\sm\units\Z_p$ is the unique maximal ideal of $\Z_p$. Furthermore, given some $x=\sum_{k\ge0}a_kp^k$, we can look at it $\bmod{p^n}$ to get the approximation $\sum_{k=0}^{n-1}a_kp^k$, which is just some integer in the set $\brackets{0,1,\dots,p^n-1}$. By making successive approximations like this, we can obtain power series representations for any element of $\Z_p$. To make this idea formal, we first need… <sup id="fnref:7"><a href="#fn:7" class="footnote">8</a></sup></p>
<div class="theorem">
Fix a positive integer $n\in\N$. Then, $\Z_p/p^n\Z_p\simeq\Z/p^n\Z$.
</div>
<div class="proof4">
We will actually show instead that $\Z_p/p^n\Z_p\simeq\Z_{(p)}/p^n\Z_{(p)}$. We leave it as an exercise to the reader to show that $\Z_{(p)}/p^n\Z_{(p)}\simeq\zmod{p^n}$.<br />
Note that $\Z_{(p)}=\brackets{q\in\Q:\abs q_p\le1}$. Furthermore, $\Z_{(p)}\hookrightarrow\Z_p$ and $p^n\Z_{(p)}\hookrightarrow p^n\Z_p$, so this inclusion descends to a map $f:\Z_{(p)}/p^n\Z_{(p)}\to\Z_p/p^n\Z_p$ which we claim is an isomorphism. Injectivity of $f$ is equivalent to the statement $p^n\Z_p\cap\Z_{(p)}=p^n\Z_{(p)}$. To show this, consider some $x\in p^n\Z_p\cap\Z_{(p)}$ so $\log_p\abs x_p\le-n$. Write $x=a/s$ where $p\nmid s$, so $\abs x_p=p^{-v_p(a)}$. Hence, $v_p(a)\ge n$ which is precisely the statement that $x\in p^n\Z_{(p)}$. Hence $f$ is injective. <br />
For surjectivity, fix any $y\in\Z_p$. We will construct an $x\in\Z_{(p)}$ such that $x\equiv y\pmod{p^n\Z_p}$. Recall that $\Q$ is a dense subset of $\Q_p$, so we can choose some $x\in\Q$ with $\abs{x-y}_p< p^{-n}$. Note that
$$\abs x_p\le\max(\abs{x-y}_p,\abs y_p)\le1$$
so $x\in\Z_{(p)}$. Furthermore, since $\Z_p=\brackets{\nabs_p\le1}$, we have $p^n\Z_p=\brackets{\nabs_p\le\abs{p^n}_p}=\brackets{\nabs_p\le p^{-n}}$, so $x-y\in p^n\Z_p$. Thus, $x\equiv y\pmod{p^n\Z_p}$, proving surjectivity.
</div>
<div class="theorem">
Every element of $\Z_p$ has a power series representation.
</div>
<div class="proof4">
Fix some $x\in\Z_p$. We will inductively define the $\{a_n\}_{n\ge0}$ so that
$$x=\sum_{n\ge0}a_np^n$$
First, $a_0$ is the unique integer $0\le a_0< p$ such that $x\equiv a_0\pmod p$. Now, suppose we have defined $a_0,\dots,a_k\in\brackets{0,\dots,p-1}$ such that
$$x\equiv\sum_{n=0}^ka_np^n\pmod{p^{k+1}}$$
We want to define $a_{k+1}$. Then, let
$$x'=\frac1{p^{k+1}}\sqbracks{x-\sum_{n=0}^ka_np^n}\in\Z_p$$
Define $a_{k+1}$ to be the unique element of $\{0,\dots,p-1\}$ such that $x'\equiv a_{k+1}\pmod p$. Then,
$$\sum_{n=0}^{k+1}a_np^n\equiv\sum_{n=0}^ka_np^n+a_{k+1}p^{k+1}\equiv\sum_{n=0}^ka_np^n+x'p^{k+1}\equiv x\pmod{p^{k+2}}$$
so the theorem follows by induction.
</div>
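<p>The inductive proof above is really an algorithm. Here is a sketch of it for elements of $\Z_{(p)}\subset\Z_p$ (rationals with denominator prime to $p$), where exact <code>Fraction</code> arithmetic stands in for $p$-adic arithmetic; the name <code>padic_digits</code> is my own:</p>

```python
from fractions import Fraction

def padic_digits(x, p, k):
    """First k digits a_0, ..., a_{k-1} of x = sum_n a_n p^n for x in Z_(p),
    i.e. a rational with denominator coprime to p, following the proof above."""
    x = Fraction(x)
    assert x.denominator % p != 0
    digits = []
    for _ in range(k):
        # the unique 0 <= a < p with x == a (mod p): reduce
        # numerator * denominator^{-1} mod p (Python 3.8+ modular inverse)
        a = x.numerator * pow(x.denominator, -1, p) % p
        digits.append(a)
        x = (x - a) / p  # the x' of the proof; denominator is still prime to p
    return digits

print(padic_digits(-1, 2, 8))              # [1, 1, 1, 1, 1, 1, 1, 1]
print(padic_digits(Fraction(1, 3), 2, 8))  # [1, 1, 0, 1, 0, 1, 0, 1]
```

<p>Sanity check for the second line: the truncation $1+2+8+32+128=171$ satisfies $3\cdot171=513\equiv1\pmod{2^8}$, so it approximates $1/3$ to $8$ binary digits.</p>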
<p>Since $\Q_p=\Z_p[1/p]$, as a corollary, we get that any $x\in\Q_p$ can be written</p>
<script type="math/tex; mode=display">% <![CDATA[
x=\sum_{n\ge-m}a_np^n\text{ where }0\le a_n< p\text{ and }m\in\Z %]]></script>
<div class="remark">
Since I didn't give all of the details of constructing completions in general, one could take the above as the definition of $\Q_p$ (i.e. as a set (or even a multiplicative monoid) $\Q_p\simeq\Q((p))$, the field of Laurent series in $p$). You would need to give formulas for addition and multiplication, and also say that
$$\abs{\sum_{n\ge-m}a_np^n}_p=p^m\text{ if }a_{-m}\neq0$$
This would be very ad hoc so one shouldn't do this, but one could.
</div>
<p>Note that, since the coefficients $a_n$ are all that matter in writing down an element of $\Q_p$, we can take a cue from how we usually write real numbers and define the notation</p>
<script type="math/tex; mode=display">\sqbracks{\dots a_2a_1a_0.a_{-1}a_{-2}\dots a_{-m}}_p:=\sum_{n\ge-m}a_np^n\in\Q_p</script>
<p>where the square brackets are unneeded when writing actual digits instead of $a_n$. Note that a $p$-adic expansion can be infinite to the left but must be finite to the right; this is the reverse of decimal expansions in $\R$.</p>
<div class="example">
For concreteness, let's write down some $p$-adic numbers.
<ul>
<li> $-1\in\Q_2$
$$\begin{align*}
-1 &= 1+1(2)+1(2)^2+1(2)^3+1(2)^4+\dots\\
&= \dots11111_2
\end{align*}$$
</li>
<li> $\sqrt{-1}\in\Q_5$
$$\begin{align*}
\sqrt{-1} &= 2+1(5)+2(5)^2+1(5)^3+3(5)^4+\dots\\
&= \dots31212_5
\end{align*}$$
</li>
<li> $\sqrt 2\in\Q_7$
$$\begin{align*}
\sqrt 2 &= 3+1(7)+2(7)^2+6(7)^3+1(7)^4+\dots\\
&= \dots16213_7
\end{align*}$$
</li>
</ul>
</div>
<p>I might explain how to calculate these expansions in a future post. For now, if you’re curious, look up <a href="https://www.wikiwand.com/en/Hensel%27s_lemma">Hensel’s lemma</a>.</p>
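<p>In the meantime, here is a sketch of the digit-by-digit lifting that Hensel’s lemma justifies, specialized to $f(x)=x^2-a$ (the name <code>hensel_sqrt</code> is my own; it assumes $p$ is an odd prime and $a$ is a nonzero square mod $p$). It recovers the truncations used in the examples above:</p>

```python
def hensel_sqrt(a, p, k):
    """Lift a root of x^2 = a (mod p) to a root mod p^k, one digit at a time.
    Assumes p is an odd prime and a is a nonzero square mod p."""
    x = next(r for r in range(p) if (r * r - a) % p == 0)  # root mod p, brute force
    for j in range(1, k):
        pj = p**j
        # write the next approximation as x + t*p^j; expanding,
        # (x + t p^j)^2 = x^2 + 2 x t p^j (mod p^{j+1}), so solve for the digit t
        t = (a - x * x) // pj * pow(2 * x, -1, p) % p
        x += t * pj
    return x  # satisfies x^2 = a (mod p^k)

x = hensel_sqrt(-1, 5, 5)
print(x, pow(x, 2, 5**5))  # 2057 3124, i.e. 2057^2 = -1 (mod 5^5); 2057 is 31212 in base 5
```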
<h1 id="ostrowskis-theorem">Ostrowski’s Theorem</h1>
<p>To end, I’ll prove a theorem due to Ostrowski and state (but not prove) the characterization of so-called local fields in characteristic 0. Ostrowski’s theorem states that the only completions of $\Q$ (with respect to nontrivial absolute values) are the real numbers $\R$ and the $p$-adic numbers $\Q_p$.</p>
<div class="theorem" name="Ostrowski">
Let $\nabs:\Q\to\R_{\ge0}$ be a nontrivial absolute value. Then, $\nabs\sim\nabs_\infty$ or $\nabs\sim\nabs_p$ for some prime $p$.
</div>
<div class="proof4">
We'll start with the non-archimedean case. Suppose that $\abs n\le1$ for all integers $n\in\Z$. Since $\nabs$ is nontrivial, we can fix a positive integer $n$ with $\abs n< 1$ and factor
$$n=\prod_{k=1}^gp_k^{e_k}$$
for some finite set of primes $\{p_k\}$. Clearly $\abs{p_k}< 1$ for some prime dividing $n$. We claim this is the case for exactly one prime. Suppose that $p,q$ are both primes with absolute value (strictly) less than 1. Well, fix exponents $e,f$ such that $\abs p^e,\abs q^f<\frac12$. Since $p^e$ and $q^f$ are coprime, we can find $x,y\in\Z$ such that $xp^e+yq^f=1$. However, since $\abs x,\abs y\le1$ ($\nabs$ is non-archimedean),
$$1=\abs1=\abs{xp^e+yq^f}\le\abs x\abs{p^e}+\abs y\abs{q^f}\le\abs{p^e}+\abs{q^f}< 1$$
a contradiction. Hence, only one prime $p$ has absolute value less than one. Thus, in general, $\abs n=\abs p^{v_p(n)}$. Letting $t=-\frac{\log\abs p}{\log p}$, we see that $\nabs=\nabs_p^t$.<br />
Now for the Archimedean case. Suppose that $\abs{\Z}$ is unbounded. Let $a,b$ and $n$ be natural numbers with $a,b>1$. Write $b^n$ in base $a$
$$b^n=\sum_{k=0}^mc_ka^k\text{ where }c_k\in\{0,\dots,a-1\}$$
and $m\le n\log_ab$ (since $a^m\le b^n$). Then, using $\abs{c_k}\le c_k< a$,
$$\abs b^n\le\sum_{k=0}^m\abs{c_k}\abs a^k\le(m+1)a\max\parens{\abs a^m,1}\le a(n\log_ab+1)\max\parens{\abs a^{n\log_ab},1}$$
Since this holds for all $n$, taking the $n$th root and the limit as $n\to\infty$ shows that
$$\abs b\le\max\parens{\abs a^{\log_ab},1}$$
Note that if $\abs b>1$, then $\abs a>1$, since otherwise we would have $\abs b\le1$. Moreover, since $\abs{\Z}$ is unbounded, there is at least one $b>1$ with $\abs b>1$, and hence $\abs a>1$ for every $a>1$. Thus, for any choice of $a,b>1$ we get $\abs b\le\abs a^{\log_ab}$. In other words,
$$\frac{\log\abs b}{\log b}\le\frac{\log\abs a}{\log a}$$
By symmetry, this is actually an equality, so there's some constant $\lambda>0$ such that $\log\abs n=\lambda\log n$ for all integers $n>1$. This means that $\abs n=n^\lambda=\abs n_\infty^\lambda$, first for $n>1$ and then (by multiplicativity) on all of $\Q$, so we win.
</div>
<p>So, despite the fact that we’re more accustomed to the Archimedean place $\nabs_\infty$ on $\Q$, the typical place on $\Q$ is non-Archimedean. With that said, what’s a local field?</p>
<div class="definition">
A <b>local field</b> is a field $F$ complete with respect to a nontrivial absolute value $\nabs:F\to\R_{\ge0}$ such that the resulting topology is locally compact.
</div>
<div class="theorem">
Let $F$ be a local field and assume that $\Char(F)=0$. Then, $F$ is isomorphic (as a topological field) to one of the following:
<ul>
<li> The real numbers $\R$ </li>
<li> The complex numbers $\C$ </li>
<li> A finite extension of the $p$-adics $\Q_p$ for some prime $p$. </li>
</ul>
</div>
<p>The idea behind the proof is that $\Q\subset F$ since $F$ has characteristic zero. Then, $\nabs$ restricts to an absolute value on $\Q$, so by Ostrowski it is either $\nabs_\infty$ or $\nabs_p$ for some prime $p$. Since $F$ is complete, it then must contain either $\R$ or $\Q_p$. After this, you need to show that $F/\R$ (or $F/\Q_p$) is a finite extension.</p>
<p>The actual last thing I’ll do is mention another definition of local fields in the non-Archimedean case.</p>
<div class="definition">
We say an absolute value $\nabs:F\to\R_{\ge0}$ is <b>discretely-valued</b> if $\abs{\units F}\subset\R_{>0}$ is a discrete subgroup.
</div>
<p>For example, $\nabs_p$ is discretely valued for all primes $p$, since $\abs{\units\Q_p}_ p=\abs{\units\Q}_ p=p^{\Z}$ is isomorphic to the integers (as a topological group).</p>
<div class="definition">
Let $F$ be a field with a discretely-valued absolute value $\nabs:F\to\R_{\ge0}$. Then, $A=\{x\in F:\abs x\le1\}$ is $F$'s <b>valuation ring</b>. It is a local ring whose unique maximal ideal is $\mfm=\{x\in F:\abs x< 1\}$. The field $k=A/\mfm$ is called the <b>residue field</b> of $F$.
</div>
<div class="exercise">
Let $\nabs:F\to\R_{\ge0}$ be an absolute value on a field $F$. Show that this makes $F$ a local field if and only if $\nabs$ is discretely valued with a finite residue field (and, of course, $F$ is complete w.r.t $\nabs$).
</div>
<div class="footnotes">
<ol>
<li id="fn:5">
<p>It will be hard for me to stop myself from doing algebra, but I don’t want this post to become unreasonably long. <a href="#fnref:5" class="reversefootnote">↩</a></p>
</li>
<li id="fn:9">
<p>I’ll probably forget to mention this when needed, so just always assume that I’m only considering nontrivial absolute values. <a href="#fnref:9" class="reversefootnote">↩</a></p>
</li>
<li id="fn:1">
<p>If you don’t know what a topology is, don’t worry. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>This means that the inverse image of any open interval in R is a union of open balls in D. <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>In the sense that its points get arbitrarily close to one another. <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>i.e. termwise but you have to be careful with division because of zeros in your sequence (just choose a representation without any). <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
<li id="fn:6">
<p>At least, to do it the way I’ll do it… <a href="#fnref:6" class="reversefootnote">↩</a></p>
</li>
<li id="fn:7">
<p>If you haven’t seen the Z_{(p)} notation before, check out my <a href="../ufd-localization">localization post</a> <a href="#fnref:7" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>

<h1 id="quick-ring-zero">A Quick Note on Rings with Mostly Zero Divisors</h1>
<p><i>2018-09-14, https://nivent.github.io/blog/quick-ring-zero</i></p>
<p>I apologize for the title of this post, but I couldn’t think of anything good. I had some random mathematical thoughts this morning that led to a couple of (hopefully interesting) questions. Before I forget, I want to write them down along with their resolutions.</p>
<p>Sadly, I don’t remember why, but I was thinking about conditions on when a localization is 0 this morning. Put more formally</p>
<div class="question">
Let $R$ be a commutative ring, and let $S\subset R$ be a multiplicative set. What are necessary and sufficient conditions for $\sinv R=0$?
</div>
<p>The natural thing to expect is that this happens exactly when $0\in S$, and indeed, one can try to prove this. It’s clear that $\sinv R=0$ when $0\in S$ since you’ve inverted $0$, so we only need to worry about the other direction. Suppose that $\sinv R=0$. Then, $a/s=0/1$ for all $a\in R$ and $s\in S$. For a fixed $a$ (and $s$), this means that there exists some $u\in S$ such that</p>
<script type="math/tex; mode=display">0=u(1a-0s)=ua</script>
<p>so $u=0$ or $a$ is a zero divisor. Well, let $a=1$; then it certainly isn’t a zero divisor, so $u=0\in S$ and we win, except not quite. Our proof made use of $1\in R$, so we’ve really only shown the following.</p>
<div class="theorem">
Let $R$ be a commutative ring <b>with unity</b>, and let $S\subset R$ be a multiplicative set. Then, $\sinv R=0\iff0\in S$.
</div>
<p>However, the original question was phrased for all commutative rings, not just those with unity. This led me to wonder the following:</p>
<div class="question">
Does there exist a (commutative) ring $R$ (without unity) such that every element is a zero divisor?
</div>
<p>At this point, I don’t know a good way to detail my thought process in thinking about this, so I’ll just skip to the answer: yes. Specifically, take (the abelian group) $R=\zmod2$ and give it a ring structure by setting $0^2=1^2=0$ (so every product is $0$). At first glance, one may worry that this doesn’t actually give a (consistent, well-defined, whatever) ring, but there’s nothing to worry about since this is just the ideal $\{0,2\}$ generated by $2$ in the ring $\zmod4$ <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>.</p>
<p>Sweet, got that figured out. After this, the next natural thing to ask was whether we can get an analogous phenomenon in the presence of units.</p>
<div class="question">
Does there exist a commutative ring $R$ with unity such that every element is a unit or a zero divisor?
</div>
<p>I hope the last condition seemed weird when you first read the question, and I hope even more that this one seems strange now; I know they both did to me <sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>. Because of that, I suspected that the answer to this question was no, and started thinking about contradictions I could reach from such a ring. This led me to the following.</p>
<div class="theorem">
Let $R$ be a commutative ring with unity such that every element is a unit or a zero divisor. Then, $R$ is a local ring. (Or so I believed; as the proof below shows, the argument has a gap, and the statement is actually false in general.)
</div>
<div class="proof4">
Let $\mfm=\brackets{r\in R:r\text{ is a zero divisor}}\subset R$. We claim that $\mfm$ is an ideal. Since it is literally the set of all non-units, this is enough to prove that $R$ is local. Fix any $r\in R$ and $a,b\in\mfm$. Then, there exist nonzero $x,y\in R$ s.t. $ax=0=by$. Hence, $rax=0$ so $(ra)\in\mfm$, and $xy(a+b)=(ax)y+(by)x=0$ so $a+b\in\mfm$, at least when $xy\neq0$. When $xy=0$, this argument breaks down, and the claim really can fail: in $\zmod6$, every element is a unit ($1,5$) or a zero divisor ($0,2,3,4$), yet $2+3=5$ is a unit, so the zero divisors are not closed under addition and $\zmod6$ is not local (it has the two maximal ideals $(2)$ and $(3)$).
</div>
<p>I’m gonna be honest; I’m not sure if this helps answer the question or not, but it’s something. Since I had no luck deriving a contradiction from this, I decided to instead try to construct such an $R$, and as it turns out, they do exist. Take $R=\F_2[x]/(x^2)$. As a set, $R=\brackets{0,1,x,1+x}$; we see that $0,x$ are zero divisors while $1,1+x$ are units as $(1+x)^2=x^2+2x+1=1$.</p>
<p>Finally, I never actually answered my original question. That is, does $\sinv R=0\implies0\in S$ where $R$ is a commutative ring? The ring $\brackets{0,2}\subset\zmod4$ from the second question doesn’t give a counterexample to this because all of its multiplicative subsets contain 0. As it turns out, the answer to this question is yes: there are no counterexamples. To see this, we only need to modify our original proof slightly.</p>
<div class="theorem">
Let $R$ be a commutative ring, and let $S\subset R$ be a multiplicative set such that $\sinv R=0$. Then, $0\in S$.
</div>
<div class="proof4">
Fix any $s\in S$. Then, $s/s=0/1$ so there exists some $u\in S$ such that $us=0$. However, $S$ is multiplicative, so since $s,u\in S$ we must also have $us=0\in S$.
</div>
<p>Luckily for me, I didn’t think to make the numerator an element of $S$ when I first thought about this question; if I had, I would have missed out on the fun of the other two questions.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>ideals are (sub)rings when you don’t require rings to have unity <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>I really like examples of strange phenomena in math <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>

<p><b>Generic Wrappers in C++</b> (2018-09-07, <a href="https://nivent.github.io/blog/generic-wrappers">https://nivent.github.io/blog/generic-wrappers</a>)</p>

<p>For a blog entitled “Thoughts of a Programmer,” I don’t actually have many CS-related posts; a more accurate name for this blog would be something like “Grammatically Flawed Ramblings of a Not Quite Mathematician”<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>. However, I recently wrote some code I think might be worth sharing, so this is a chance for me to return to this blog’s roots <sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>. The main goal of this post is to explain the code written in <a href="https://github.com/NivenT/jubilant-funicular/tree/master/include/nta/Wrapper.h">this beautiful file</a>. This code automates the process of creating a wrapper class, so next time I want to have a “named int” class, I won’t need to write<sup id="fnref:3"><a href="#fn:3" class="footnote">3</a></sup></p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">NamedInt</span> <span class="p">{</span>
<span class="k">private</span><span class="o">:</span>
<span class="k">typedef</span> <span class="kt">int</span> <span class="n">inner</span><span class="p">;</span>
<span class="kt">int</span> <span class="n">data</span><span class="p">;</span>
<span class="k">public</span><span class="o">:</span>
<span class="n">NamedInt</span><span class="p">(</span><span class="kt">int</span> <span class="n">data</span> <span class="o">=</span> <span class="mi">0</span><span class="p">)</span> <span class="o">:</span> <span class="n">data</span><span class="p">(</span><span class="n">data</span><span class="p">)</span> <span class="p">{}</span>
<span class="n">NamedInt</span> <span class="k">operator</span><span class="o">+</span><span class="p">(</span><span class="k">const</span> <span class="n">NamedInt</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="k">const</span> <span class="p">{</span> <span class="k">return</span> <span class="n">data</span> <span class="o">+</span> <span class="n">rhs</span><span class="p">.</span><span class="n">data</span><span class="p">;</span> <span class="p">}</span>
<span class="c1">// All the other operators...
</span><span class="p">};</span>
</code></pre></div></div>
<p>To avoid (some) confusion, I’ll adopt the convention that the type being wrapped (e.g. <code class="highlighter-rouge">int</code> in the above example) will be called the inner type in contrast to the wrapper type (e.g. <code class="highlighter-rouge">NamedInt</code> in the above example).</p>
<h1 id="motivation">Motivation</h1>
<p>Why create wrapper classes in the first place? The int type already exists in C++, so what’s the point of creating a class whose sole job is to simulate an integer? Long story short, the main reason is type safety. Programmers are fond of making mistakes, so anything we can do to make it harder for our mistakes to compile is a plus. One of the standard examples of what I’m talking about comes from writing code involving units. Imagine you’re writing code to control a robot. Imagine further that this robot looks around its environment for interesting things to pick up, and then goes and picks up that thing if it’s close enough. You may implement this with code like</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">Robot</span> <span class="p">{</span>
<span class="k">private</span><span class="o">:</span>
<span class="kt">float</span> <span class="n">goto_threshold</span><span class="p">;</span>
<span class="c1">// other stuff...
</span><span class="nl">public:</span>
<span class="kt">bool</span> <span class="nf">should_goto</span><span class="p">(</span><span class="kt">float</span> <span class="n">interesting_object_distance</span><span class="p">)</span> <span class="p">{</span>
<span class="k">return</span> <span class="n">interesting_object_distance</span> <span class="o"><</span> <span class="n">goto_threshold</span><span class="p">;</span>
<span class="p">}</span>
<span class="c1">// other stuff...
</span><span class="p">};</span>
</code></pre></div></div>
<p>Seems simple enough. However, you probably don’t want your robot wandering too far away to pick up random items because your testing area is small, or because the further it goes, the more likely it is to crash into something, or because whatever; hence, you set <code class="highlighter-rouge">goto_threshold = 50; // feet</code>. However, computer vision has come a long way <sup id="fnref:4"><a href="#fn:4" class="footnote">4</a></sup>, so it can detect objects up to 3000 feet away; to keep numbers somewhat small, this causes you to store <code class="highlighter-rouge">interesting_object_distance</code> in yards instead of feet. This is a problem. It means <code class="highlighter-rouge">should_goto</code> is comparing feet to yards, so your robot is bound to have some funky behavior <sup id="fnref:5"><a href="#fn:5" class="footnote">5</a></sup>. The best solution to a problem like this is to introduce a <code class="highlighter-rouge">Feet</code> class wrapping the float type and another <code class="highlighter-rouge">Yard</code> class also wrapping the float type. Then, you can declare <code class="highlighter-rouge">Feet goto_threshold = 50;</code> and <code class="highlighter-rouge">bool should_goto(Yard interesting_object_distance);</code>. Since <code class="highlighter-rouge">Feet</code> and <code class="highlighter-rouge">Yard</code> are different types, they will (or at least should) be incomparable, so the compiler will yell at this faulty comparison.</p>
<p>For me, the desire to use wrapper types came from an ambiguous function error. While working on a project, I wrote the following code:</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">ECS</span> <span class="p">{</span>
<span class="k">public</span><span class="o">:</span>
<span class="kt">void</span> <span class="n">broadcast</span><span class="p">(</span><span class="k">const</span> <span class="n">Message</span><span class="o">&</span> <span class="n">msg</span><span class="p">,</span> <span class="n">EntityID</span> <span class="n">entity</span><span class="p">);</span>
<span class="kt">void</span> <span class="n">broadcast</span><span class="p">(</span><span class="k">const</span> <span class="n">Message</span><span class="o">&</span> <span class="n">msg</span><span class="p">,</span> <span class="n">ComponentListID</span> <span class="n">lists</span><span class="p">);</span>
<span class="p">};</span>
</code></pre></div></div>
<p>and the compiler was not happy. This is because, while it looks like these functions have different signatures, elsewhere in my codebase I had written</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">typedef</span> <span class="kt">uint64_t</span> <span class="n">EntityID</span><span class="p">;</span>
<span class="k">typedef</span> <span class="kt">uint64_t</span> <span class="n">ComponentListID</span><span class="p">;</span>
</code></pre></div></div>
<p>I won’t get into the reasons that I needed these two similar looking functions; just know that they are semantically different, and in general, it makes sense to think of EntityIDs as being different from ComponentListIDs even if they’re essentially both just (unsigned long) integers. Because of this, I decided that it would be in my best interest to make <code class="highlighter-rouge">EntityID</code> and <code class="highlighter-rouge">ComponentListID</code> wrappers around <code class="highlighter-rouge">uint64_t</code> instead of just typedefs. Since this is the only place in the codebase I need wrapper types, I could just have written a couple wrapper classes and called it a day, but what’s the fun in that? Instead, I decided I needed to find a generic way to create a wrapper around any class so that the new wrapper class inherits all the methods (or at least all the operations) of the class it wraps. This way, not only would <code class="highlighter-rouge">EntityID</code> and <code class="highlighter-rouge">ComponentListID</code> be different types (removing the ambiguity in the definition of ECS::broadcast), but I would also still be able to do things like</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">EntityID</span> <span class="n">a</span> <span class="o">=</span> <span class="mi">1</span><span class="p">,</span> <span class="n">b</span> <span class="o">=</span> <span class="mi">3</span><span class="p">;</span>
<span class="n">a</span> <span class="o">+=</span> <span class="n">b</span><span class="p">;</span>
<span class="k">if</span> <span class="p">(</span><span class="n">a</span> <span class="o">&</span> <span class="n">b</span><span class="p">)</span> <span class="n">cout</span><span class="o"><<</span><span class="n">a</span><span class="o"><<</span><span class="n">endl</span><span class="p">;</span>
</code></pre></div></div>
<p>but, you know, more relevant to my project.</p>
<h1 id="first-attempt">First Attempt</h1>
<p>Ok <sup id="fnref:12"><a href="#fn:12" class="footnote">6</a></sup>, so we want an automated way to create wrapper classes over arbitrary <sup id="fnref:7"><a href="#fn:7" class="footnote">7</a></sup> types. Well, when creating generic types in C++, the go-to method is templates. As such, the first thing we might try is something like this:</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">template</span><span class="o"><</span><span class="k">typename</span> <span class="n">T</span><span class="o">></span>
<span class="k">class</span> <span class="nc">Wrapper</span> <span class="p">{</span>
<span class="k">private</span><span class="o">:</span>
<span class="n">T</span> <span class="n">data</span><span class="p">;</span>
<span class="k">public</span><span class="o">:</span>
<span class="n">Wrapper</span><span class="p">(</span><span class="n">T</span> <span class="n">d</span><span class="p">)</span> <span class="o">:</span> <span class="n">data</span><span class="p">(</span><span class="n">d</span><span class="p">)</span> <span class="p">{}</span>
<span class="n">Wrapper</span> <span class="k">operator</span><span class="o">+</span><span class="p">(</span><span class="k">const</span> <span class="n">Wrapper</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="k">const</span> <span class="p">{</span> <span class="k">return</span> <span class="n">data</span> <span class="o">+</span> <span class="n">rhs</span><span class="p">.</span><span class="n">data</span><span class="p">;</span> <span class="p">}</span>
<span class="n">Wrapper</span> <span class="k">operator</span><span class="o">-</span><span class="p">(</span><span class="k">const</span> <span class="n">Wrapper</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="k">const</span> <span class="p">{</span> <span class="k">return</span> <span class="n">data</span> <span class="o">-</span> <span class="n">rhs</span><span class="p">.</span><span class="n">data</span><span class="p">;</span> <span class="p">}</span>
<span class="c1">// other operations
</span><span class="p">};</span>
<span class="k">typedef</span> <span class="n">Wrapper</span><span class="o"><</span><span class="kt">uint64_t</span><span class="o">></span> <span class="n">EntityID</span><span class="p">;</span>
<span class="k">typedef</span> <span class="kt">uint64_t</span> <span class="n">ComponentListID</span><span class="p">;</span>
</code></pre></div></div>
<p>The first thing you probably noticed about this approach is that <code class="highlighter-rouge">EntityID</code> became a wrapper while <code class="highlighter-rouge">ComponentListID</code> remained a primitive type. This is because using templates only allows for one wrapper class around a given type. Thus, if you were to need, for example, a third type of ID (e.g. a <code class="highlighter-rouge">ComponentID</code>), this approach would be no use. However, there is an even larger issue with this approach; it is not truly generic. This gives you a way of creating wrapper classes, but it only lets you wrap types that support a given set of operations! For example, with things as they are above, you wouldn’t be able to use <code class="highlighter-rouge">Wrapper<std::string></code> in your code because you can’t subtract strings. In practice, this means that if you want your wrapper template to support all arithmetic operations (so that it can wrap numeric types like int or uint64_t), then you will only be able to wrap types that themselves support all arithmetic operations. This is horribly restrictive.</p>
<p>We need a better solution: one that lets us create arbitrarily many (different) types wrapping the same inner type, and one that causes the wrapper to inherit all the operations supported by the inner type (without causing compiler errors).</p>
<p>The first issue is easy to fix. Templates only create one wrapper class per inner type? No problem; just use macros instead. By writing</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#define CREATE_WRAPPER(name, inner) \
class name { \
private: \
inner data; \
public: \
name(inner d) : data(d) {} \
name operator+(const name& rhs) const { return data + rhs.data; } \
name operator-(const name& rhs) const { return data - rhs.data; } \
</span><span class="cm">/* other operations */</span><span class="cp"> \
};
</span><span class="n">CREATE_WRAPPER</span><span class="p">(</span><span class="n">EntityID</span><span class="p">,</span> <span class="kt">uint64_t</span><span class="p">)</span>
<span class="n">CREATE_WRAPPER</span><span class="p">(</span><span class="n">ComponentListID</span><span class="p">,</span> <span class="kt">uint64_t</span><span class="p">)</span>
</code></pre></div></div>
<p>we give each class a unique name, so there’s no worry if two classes wrap the same type. However, there’s still the issue that writing <code class="highlighter-rouge">CREATE_WRAPPER(FirstName, std::string)</code> would cause the compiler to yell at us. At this point, I encourage you to pause and spend a few minutes thinking about how we could fix this. <sup id="fnref:9"><a href="#fn:9" class="footnote">8</a></sup></p>
<h1 id="why-this-should-be-possible">Why This Should Be Possible</h1>
<p>Given how well the first attempt went, it should seem like this might not be possible in C++, or at least that it would be very hard to do. Nevertheless, I would like to spend this section giving a few reasons for why you might believe that this is actually doable with reasonable effort.</p>
<p>First, if it were not possible, then this blog post probably wouldn’t exist.</p>
<p>That actually isn’t a very good reason, so here’s one that’s a little better. This type of thing is possible in other languages. Explicitly, using mypy with Python, you can write something like</p>
<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">typing</span> <span class="kn">import</span> <span class="n">NewType</span>
<span class="n">EntityID</span> <span class="o">=</span> <span class="n">NewType</span><span class="p">(</span><span class="s">'EntityID'</span><span class="p">,</span> <span class="nb">int</span><span class="p">)</span>
<span class="n">ComponentListID</span> <span class="o">=</span> <span class="n">NewType</span><span class="p">(</span><span class="s">'ComponentListID'</span><span class="p">,</span> <span class="nb">int</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">broadcast</span><span class="p">(</span><span class="n">entity</span><span class="p">:</span> <span class="n">EntityID</span><span class="p">):</span>
    <span class="k">print</span><span class="p">(</span><span class="s">'Entity'</span><span class="p">)</span>
<span class="n">broadcast</span><span class="p">(</span><span class="mi">3</span><span class="p">)</span> <span class="c"># mypy complains because int is not the same as EntityID</span>
<span class="n">broadcast</span><span class="p">(</span><span class="n">EntityID</span><span class="p">(</span><span class="mi">3</span><span class="p">))</span> <span class="c"># fine; prints Entity</span>
<span class="n">broadcast</span><span class="p">(</span><span class="n">ComponentListID</span><span class="p">(</span><span class="mi">3</span><span class="p">))</span> <span class="c"># mypy complains; a ComponentListID is not an EntityID</span>
</code></pre></div></div>
<p>Now, I’m no Python expert <sup id="fnref:6"><a href="#fn:6" class="footnote">9</a></sup>, but I have spent some time with mypy. Based on what I recall, my understanding is that this is implemented by using a “layered” approach. To python, EntityID=ComponentListID=int, so all three types support the same operations; however, mypy “artificially” considers them as separate types and keeps track of which type is used where via static analysis; this lets it catch bugs like the call to <code class="highlighter-rouge">broadcast(3)</code>. Unfortunately, in producing similar functionality in C++, we’re constrained to work within the language and not outside of it like mypy does with Python. Still, the fact that any kind of solution to our problem exists suggests that a solution appropriate to our needs exists as well.</p>
<p>The third, final, and (probably) most convincing reason to believe this is possible is that all the information we need should be available at compile time, right? Our big issue is that we want wrappers to automatically support the same operations as the inner type. Well, when we write (something like) <code class="highlighter-rouge">CREATE_WRAPPER(EntityID, uint64_t)</code>, the compiler knows all the type information associated to <code class="highlighter-rouge">uint64_t</code>, including all its supported operations. Hence, it stands to reason that we should be able to take advantage of this to have <code class="highlighter-rouge">EntityID</code> support those same operations. As we will see in the next section, this can be done <sup id="fnref:8"><a href="#fn:8" class="footnote">10</a></sup>.</p>
<h1 id="the-basic-idea">The Basic Idea</h1>
<p>The basic idea is to have two versions of each operation. In pseudo-code, we want to write</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">Wrapper</span> <span class="n">Wrapper</span><span class="o">::</span><span class="k">operator</span><span class="o">+</span><span class="p">(</span><span class="k">const</span> <span class="n">Wrapper</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="k">const</span> <span class="p">{</span>
<span class="k">if</span> <span class="p">(</span><span class="n">Wrapper</span><span class="o">::</span><span class="n">inner</span> <span class="n">supports</span> <span class="n">addition</span><span class="p">)</span> <span class="p">{</span>
<span class="k">return</span> <span class="n">data</span> <span class="o">+</span> <span class="n">rhs</span><span class="p">.</span><span class="n">data</span><span class="p">;</span>
<span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
<span class="n">unsupported_operation_behavior</span><span class="p">();</span>
<span class="p">}</span>
<span class="p">}</span>
</code></pre></div></div>
<p>It seems to me that there’s no clear choice for what <code class="highlighter-rouge">unsupported_operation_behavior()</code> should do. Ideally, this code never gets run because you shouldn’t try to do anything with your wrapper types that you couldn’t even do on the inner type. Currently, I return a default wrapper type in these situations (in the project I’m working on), but another reasonable action would be to crash the program.</p>
<p>The question now becomes: how do we exploit compile-time information in order to know whether an arbitrary type supports addition? For questions like this, <a href="https://en.cppreference.com/w/cpp/header/type_traits">the &lt;type_traits&gt; header</a> comes in handy. First, we write a custom class that checks for the existence of an addition operator.</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">namespace</span> <span class="n">check</span> <span class="p">{</span>
<span class="k">struct</span> <span class="n">Nop</span> <span class="p">{};</span>
<span class="k">template</span><span class="o"><</span><span class="k">typename</span> <span class="n">T</span><span class="p">,</span> <span class="k">typename</span> <span class="n">U</span><span class="o">></span> <span class="n">Nop</span> <span class="k">operator</span><span class="o">+</span><span class="p">(</span><span class="k">const</span> <span class="n">T</span><span class="o">&</span><span class="p">,</span> <span class="k">const</span> <span class="n">U</span><span class="o">&</span><span class="p">);</span>
<span class="k">template</span><span class="o"><</span><span class="k">typename</span> <span class="n">T</span><span class="p">,</span> <span class="k">typename</span> <span class="n">U</span> <span class="o">=</span> <span class="n">T</span><span class="o">></span>
<span class="k">struct</span> <span class="n">AddExists</span> <span class="p">{</span>
<span class="k">enum</span> <span class="p">{</span> <span class="n">value</span> <span class="o">=</span> <span class="o">!</span><span class="n">std</span><span class="o">::</span><span class="n">is_same</span><span class="o"><</span><span class="k">decltype</span><span class="p">(</span><span class="o">*</span><span class="p">(</span><span class="n">T</span><span class="o">*</span><span class="p">)(</span><span class="mi">0</span><span class="p">)</span> <span class="o">+</span> <span class="o">*</span><span class="p">(</span><span class="n">U</span><span class="o">*</span><span class="p">)(</span><span class="mi">0</span><span class="p">)),</span> <span class="n">Nop</span><span class="o">>::</span><span class="n">value</span> <span class="p">};</span>
<span class="p">};</span>
<span class="c1">// AddExists<int>::value == 1
</span> <span class="c1">// AddExists<Nop>::value == 0
</span> <span class="c1">// AddExists<int, std::string>::value == 0
</span><span class="p">}</span>
</code></pre></div></div>
<p>The above code sets up a fallback addition operator returning a custom <code class="highlighter-rouge">Nop</code> type for any pair of types that doesn’t support addition. Then, to find out whether two types <code class="highlighter-rouge">T</code> and <code class="highlighter-rouge">U</code> do support addition, it simply checks whether adding them returns this <code class="highlighter-rouge">Nop</code> type or something else.</p>
<p>Given this, we can implement addition on our Wrapper type with code like the below (still in our macro):</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">template</span><span class="o"><</span><span class="k">typename</span> <span class="n">T</span><span class="o">></span>
<span class="k">auto</span> <span class="n">__Add</span><span class="p">(</span><span class="k">const</span> <span class="n">T</span><span class="o">&</span> <span class="n">lhs</span><span class="p">,</span> <span class="k">const</span> <span class="n">T</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="o">-></span> <span class="k">decltype</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">enable_if_t</span><span class="o"><</span><span class="n">check</span><span class="o">::</span><span class="n">AddExists</span><span class="o"><</span><span class="n">T</span><span class="o">>::</span><span class="n">value</span><span class="p">,</span> <span class="n">Wrapper</span><span class="o">></span><span class="p">{})</span> <span class="p">{</span>
<span class="k">return</span> <span class="n">lhs</span> <span class="o">+</span> <span class="n">rhs</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">template</span><span class="o"><</span><span class="k">typename</span> <span class="n">T</span><span class="o">></span>
<span class="k">auto</span> <span class="n">__Add</span><span class="p">(</span><span class="k">const</span> <span class="n">T</span><span class="o">&</span> <span class="n">lhs</span><span class="p">,</span> <span class="k">const</span> <span class="n">T</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="o">-></span> <span class="k">decltype</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">enable_if_t</span><span class="o"><</span><span class="o">!</span><span class="n">check</span><span class="o">::</span><span class="n">AddExists</span><span class="o"><</span><span class="n">T</span><span class="o">>::</span><span class="n">value</span><span class="p">,</span> <span class="n">Wrapper</span><span class="o">></span><span class="p">{})</span> <span class="p">{</span>
<span class="n">unsupported_operation_behavior</span><span class="p">();</span>
<span class="k">return</span> <span class="n">Wrapper</span><span class="p">();</span>
<span class="p">}</span>
<span class="n">Wrapper</span> <span class="n">Wrapper</span><span class="o">::</span><span class="k">operator</span><span class="o">+</span><span class="p">(</span><span class="k">const</span> <span class="n">Wrapper</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="k">const</span> <span class="p">{</span>
<span class="k">return</span> <span class="n">__Add</span><span class="o"><</span><span class="n">Wrapper</span><span class="o">::</span><span class="n">inner</span><span class="o">></span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="n">rhs</span><span class="p">.</span><span class="n">data</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>
<p>I think with some thought, the above code is mostly self-explanatory. It creates two <code class="highlighter-rouge">__Add</code> functions, one for types that support addition and one for types that don’t. When adding <code class="highlighter-rouge">Wrapper</code>s, the actual computation is relegated to the <code class="highlighter-rouge">__Add</code> functions, and C++ automatically knows which one to invoke since it knows (at compile time) whether or not <code class="highlighter-rouge">Wrapper::inner</code> supports addition. If you are using C++17, then the above code can be simplified a little, becoming <sup id="fnref:10"><a href="#fn:10" class="footnote">11</a></sup></p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">template</span><span class="o"><</span><span class="k">typename</span> <span class="n">T</span><span class="o">></span>
<span class="n">Wrapper</span> <span class="n">__Add</span><span class="p">(</span><span class="k">const</span> <span class="n">T</span><span class="o">&</span> <span class="n">lhs</span><span class="p">,</span> <span class="k">const</span> <span class="n">T</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="p">{</span>
<span class="k">if</span> <span class="k">constexpr</span> <span class="p">(</span><span class="n">check</span><span class="o">::</span><span class="n">AddExists</span><span class="o"><</span><span class="n">T</span><span class="o">>::</span><span class="n">value</span><span class="p">)</span> <span class="p">{</span>
<span class="k">return</span> <span class="n">lhs</span> <span class="o">+</span> <span class="n">rhs</span><span class="p">;</span>
<span class="p">}</span> <span class="k">else</span> <span class="p">{</span>
<span class="n">unsupported_operation_behavior</span><span class="p">();</span>
<span class="k">return</span> <span class="n">Wrapper</span><span class="p">();</span>
<span class="p">}</span>
<span class="p">}</span>
<span class="n">Wrapper</span> <span class="n">Wrapper</span><span class="o">::</span><span class="k">operator</span><span class="o">+</span><span class="p">(</span><span class="k">const</span> <span class="n">Wrapper</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="k">const</span> <span class="p">{</span>
<span class="k">return</span> <span class="n">__Add</span><span class="o"><</span><span class="n">Wrapper</span><span class="o">::</span><span class="n">inner</span><span class="o">></span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="n">rhs</span><span class="p">.</span><span class="n">data</span><span class="p">);</span>
<span class="p">}</span>
</code></pre></div></div>
<p>which is a little closer to the pseudo-code at the beginning of this section.</p>
<p>The same approach we used for addition can be used with all other operations (potentially with minor modifications). Thus, by writing lots of analogous code (and maybe some helper macros), you can use this basic idea to create our desired generic wrapper class!</p>
<h1 id="some-hiccups">Some Hiccups</h1>
<p>As is often the case, the full story is not as simple as one would like. When implementing this, I ran into a few issues after figuring out the basic idea outlined in the previous section. Unfortunately, I have waited long enough between writing that code and this post that I no longer recall what all those issues were. The only one that comes to mind is printing.</p>
<p>In C++, in order to support printing (via <code class="highlighter-rouge">cout<<</code>) a custom type <code class="highlighter-rouge">Wrapper</code>, you need to implement a function with signature</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&</span> <span class="k">operator</span><span class="o"><<</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&</span><span class="p">,</span> <span class="k">const</span> <span class="n">Wrapper</span><span class="o">&</span><span class="p">);</span>
</code></pre></div></div>
<p>At first, you may think you can do the same thing we did for addition and write (where I omitted the <code class="highlighter-rouge">\</code> after each line in order to have helpful syntax highlighting) <sup id="fnref:11"><a href="#fn:11" class="footnote">12</a></sup></p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#define CREATE_WRAPPER(name, inner) </span><span class="cm">/* This macro includes all the below code plus other stuff */</span><span class="cp">
</span><span class="k">template</span><span class="o"><</span><span class="k">typename</span> <span class="n">T</span><span class="o">></span>
<span class="k">static</span> <span class="k">auto</span> <span class="n">__Print</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&</span> <span class="n">lhs</span><span class="p">,</span> <span class="k">const</span> <span class="n">T</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="o">-></span> <span class="k">decltype</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">enable_if_t</span><span class="o"><</span><span class="n">check</span><span class="o">::</span><span class="n">LShiftExists</span><span class="o"><</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="p">,</span> <span class="n">T</span><span class="o">>::</span><span class="n">value</span><span class="p">,</span> <span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&></span><span class="p">{})</span> <span class="p">{</span>
<span class="k">return</span> <span class="n">lhs</span><span class="o"><<</span><span class="err">#</span><span class="n">name</span><span class="o"><<</span><span class="s">"("</span><span class="o"><<</span><span class="n">rhs</span><span class="o"><<</span><span class="s">")"</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">template</span><span class="o"><</span><span class="k">typename</span> <span class="n">T</span><span class="o">></span>
<span class="k">static</span> <span class="k">auto</span> <span class="n">__Print</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&</span> <span class="n">lhs</span><span class="p">,</span> <span class="k">const</span> <span class="n">T</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="o">-></span> <span class="k">decltype</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">enable_if_t</span><span class="o"><!</span><span class="n">check</span><span class="o">::</span><span class="n">LShiftExists</span><span class="o"><</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="p">,</span> <span class="n">T</span><span class="o">>::</span><span class="n">value</span><span class="p">,</span> <span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&></span><span class="p">{})</span> <span class="p">{</span>
<span class="k">return</span> <span class="n">lhs</span><span class="o"><<</span><span class="err">#</span><span class="n">name</span><span class="p">;</span>
<span class="p">}</span>
<span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&</span> <span class="k">operator</span><span class="o"><<</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&</span> <span class="n">lhs</span><span class="p">,</span> <span class="k">const</span> <span class="n">name</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="p">{</span>
<span class="k">return</span> <span class="n">name</span><span class="o">::</span><span class="n">__Print</span><span class="o"><</span><span class="n">inner</span><span class="o">></span><span class="p">(</span><span class="n">lhs</span><span class="p">,</span> <span class="n">rhs</span><span class="p">.</span><span class="n">data</span><span class="p">);</span>
<span class="p">}</span>
<span class="cm">/* end macro stuff */</span>
<span class="n">CREATE_WRAPPER</span><span class="p">(</span><span class="n">EntityID</span><span class="p">,</span> <span class="kt">uint64_t</span><span class="p">)</span>
<span class="c1">// cout<<EntityID(4); should print out "EntityID(4)"
</span><span class="n">CREATE_WRAPPER</span><span class="p">(</span><span class="n">Test</span><span class="p">,</span> <span class="n">glm</span><span class="o">::</span><span class="n">vec2</span><span class="p">)</span>
<span class="c1">// cout<<Test(glm::vec2(0)); should print out "Test"
</span></code></pre></div></div>
<p>However, there are two issues with the above code. First, if you scroll all the way to the right, you will notice that the return type of <code class="highlighter-rouge">__Print</code> is more-or-less</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">decltype</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">enable_if_t</span><span class="o"><</span><span class="n">std</span><span class="o">::</span><span class="n">true_type</span><span class="p">,</span> <span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&></span><span class="p">{})</span>
</code></pre></div></div>
<p>which is a problem. I won’t get into the details, but suffice it to say, you can’t (easily) use <code class="highlighter-rouge">std::enable_if/std::enable_if_t</code> with reference types. This is basically because <code class="highlighter-rouge">std::enable_if_t</code> is just an alias for the type specified in its (second) parameter, so the return type asks <code class="highlighter-rouge">decltype</code> to construct a value of that type via <code class="highlighter-rouge">{}</code>. If this is a reference type, then we would need to give it something to bind to, but here we give it an empty construction. At the same time, you can’t simply replace <code class="highlighter-rouge">std::ostream&</code> with <code class="highlighter-rouge">std::ostream</code> because, among other reasons, the copy constructor for <code class="highlighter-rouge">std::ostream</code> is deleted. In the end, the solution I took was to replace the reference with a pointer, writing</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#define CREATE_WRAPPER(name, inner) </span><span class="cm">/* This macro includes all the below code plus other stuff */</span><span class="cp">
</span><span class="k">template</span><span class="o"><</span><span class="k">typename</span> <span class="n">T</span><span class="o">></span>
<span class="k">static</span> <span class="k">auto</span> <span class="n">__Print</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&</span> <span class="n">lhs</span><span class="p">,</span> <span class="k">const</span> <span class="n">T</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="o">-></span> <span class="k">decltype</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">enable_if_t</span><span class="o"><</span><span class="n">check</span><span class="o">::</span><span class="n">LShiftExists</span><span class="o"><</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="p">,</span> <span class="n">T</span><span class="o">>::</span><span class="n">value</span><span class="p">,</span> <span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">*></span><span class="p">{})</span> <span class="p">{</span>
<span class="n">lhs</span><span class="o"><<</span><span class="err">#</span><span class="n">name</span><span class="o"><<</span><span class="s">"("</span><span class="o"><<</span><span class="n">rhs</span><span class="o"><<</span><span class="s">")"</span><span class="p">;</span>
<span class="k">return</span> <span class="o">&</span><span class="n">lhs</span><span class="p">;</span>
<span class="p">}</span>
<span class="k">template</span><span class="o"><</span><span class="k">typename</span> <span class="n">T</span><span class="o">></span>
<span class="k">static</span> <span class="k">auto</span> <span class="n">__Print</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&</span> <span class="n">lhs</span><span class="p">,</span> <span class="k">const</span> <span class="n">T</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="o">-></span> <span class="k">decltype</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">enable_if_t</span><span class="o"><!</span><span class="n">check</span><span class="o">::</span><span class="n">LShiftExists</span><span class="o"><</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="p">,</span> <span class="n">T</span><span class="o">>::</span><span class="n">value</span><span class="p">,</span> <span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">*></span><span class="p">{})</span> <span class="p">{</span>
<span class="n">lhs</span><span class="o"><<</span><span class="err">#</span><span class="n">name</span><span class="p">;</span>
<span class="k">return</span> <span class="o">&</span><span class="n">lhs</span><span class="p">;</span>
<span class="p">}</span>
<span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&</span> <span class="k">operator</span><span class="o"><<</span><span class="p">(</span><span class="n">std</span><span class="o">::</span><span class="n">ostream</span><span class="o">&</span> <span class="n">lhs</span><span class="p">,</span> <span class="k">const</span> <span class="n">name</span><span class="o">&</span> <span class="n">rhs</span><span class="p">)</span> <span class="p">{</span>
<span class="k">return</span> <span class="o">*</span><span class="n">name</span><span class="o">::</span><span class="n">__Print</span><span class="o"><</span><span class="n">inner</span><span class="o">></span><span class="p">(</span><span class="n">lhs</span><span class="p">,</span> <span class="n">rhs</span><span class="p">.</span><span class="n">data</span><span class="p">);</span>
<span class="p">}</span>
<span class="cm">/* end macro stuff */</span>
<span class="n">CREATE_WRAPPER</span><span class="p">(</span><span class="n">EntityID</span><span class="p">,</span> <span class="kt">uint64_t</span><span class="p">)</span>
<span class="c1">// cout<<EntityID(4); should print out "EntityID(4)"
</span><span class="n">CREATE_WRAPPER</span><span class="p">(</span><span class="n">Test</span><span class="p">,</span> <span class="n">glm</span><span class="o">::</span><span class="n">vec2</span><span class="p">)</span>
<span class="c1">// cout<<Test(glm::vec2(0)); should print out "Test"
</span></code></pre></div></div>
<p>This leaves only one last issue before we’re satisfied: all this code is in a header file. Recall that we wanted a generic way to create wrapper types, and settled on using a macro to do so. Well, that macro (and all its encapsulated code) are going to be stored in a single header file. Since we want printing to be supported without further user intervention, this printing code will be in the header as well. However, <code class="highlighter-rouge">std::ostream& operator<<(std::ostream&, const name&)</code> is a regular function and not a method of our wrapper class. This means that the linker will have a fun time yelling at us about this function having multiple definitions the second two files include this header. Thankfully, this has a simple fix. All you need to do is mark the function <code class="highlighter-rouge">inline</code>, and the compiler shuts up.</p>
<h1 id="the-finished-product">The Finished Product</h1>
<p>At last, we have everything we need in order to create generic wrappers. If you want to see a complete implementation of this, then <a href="https://github.com/NivenT/jubilant-funicular/blob/master/include/nta/Wrapper.h">check out the file I referenced at the beginning</a>. If you want to see these wrapper types in action, then that project has you covered as well. In particular, <a href="https://github.com/NivenT/jubilant-funicular/blob/master/tests/wrapper_tests.cpp">here</a> are tests for the wrapper macro, and <a href="https://github.com/NivenT/jubilant-funicular/blob/master/src/ECS.cpp">here</a> is a wrapper type (<code class="highlighter-rouge">EntityID</code>) being used in the wild. For convenience, I’ll end with a (modified) snippet of the tests file I just linked to</p>
<div class="language-c++ highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="cp">#include "Wrapper.h"
#include "utils.h"
</span>
<span class="n">CREATE_WRAPPER</span><span class="p">(</span><span class="n">Inches</span><span class="p">,</span> <span class="kt">int</span><span class="p">)</span>
<span class="n">CREATE_WRAPPER</span><span class="p">(</span><span class="n">Feet</span><span class="p">,</span> <span class="kt">int</span><span class="p">)</span>
<span class="n">CREATE_WRAPPER</span><span class="p">(</span><span class="n">Name</span><span class="p">,</span> <span class="n">string</span><span class="p">)</span>
<span class="kt">int</span> <span class="n">main</span><span class="p">(</span><span class="kt">int</span> <span class="n">argc</span><span class="p">,</span> <span class="kt">char</span><span class="o">*</span> <span class="n">argv</span><span class="p">[])</span> <span class="p">{</span>
<span class="n">Inches</span> <span class="n">a</span> <span class="o">=</span> <span class="mi">12</span><span class="p">;</span>
<span class="n">Feet</span> <span class="n">b</span> <span class="o">=</span> <span class="mi">1</span><span class="p">;</span>
<span class="n">Name</span> <span class="n">c</span><span class="p">(</span><span class="s">"Steve"</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">utils</span><span class="o">::</span><span class="n">to_string</span><span class="p">(</span><span class="n">a</span><span class="p">)</span> <span class="o">==</span> <span class="s">"Inches(12)"</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">utils</span><span class="o">::</span><span class="n">to_string</span><span class="p">(</span><span class="n">b</span><span class="p">)</span> <span class="o">==</span> <span class="s">"Feet(1)"</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">utils</span><span class="o">::</span><span class="n">to_string</span><span class="p">(</span><span class="n">c</span><span class="p">)</span> <span class="o">==</span> <span class="s">"Name(Steve)"</span><span class="p">);</span>
<span class="c1">// Confirms that you can't check for equality between Inches and Feet even though they wrap the same type
</span> <span class="k">if</span> <span class="p">((</span><span class="kt">int</span><span class="p">)</span><span class="n">check</span><span class="o">::</span><span class="n">EqualsExists</span><span class="o"><</span><span class="n">Inches</span><span class="p">,</span> <span class="n">Feet</span><span class="o">>::</span><span class="n">value</span> <span class="o">==</span> <span class="mi">1</span><span class="p">)</span> <span class="p">{</span>
<span class="n">assert</span><span class="p">(</span><span class="nb">false</span><span class="p">);</span>
<span class="p">}</span>
<span class="n">assert</span><span class="p">(</span><span class="n">a</span> <span class="o">==</span> <span class="mi">12</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">a</span> <span class="o">+</span> <span class="mi">5</span> <span class="o">==</span> <span class="mi">17</span><span class="p">);</span>
<span class="n">assert</span><span class="p">((</span><span class="n">b</span> <span class="o"><<</span> <span class="mi">3</span><span class="p">)</span> <span class="o">==</span> <span class="mi">8</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">a</span><span class="o">++</span> <span class="o">==</span> <span class="mi">12</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">a</span> <span class="o">==</span> <span class="mi">13</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="o">++</span><span class="n">b</span> <span class="o">==</span> <span class="mi">2</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="o">~</span><span class="n">Inches</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span> <span class="o">==</span> <span class="n">Inches</span><span class="p">(</span><span class="o">~</span><span class="mi">0</span><span class="p">));</span>
<span class="n">a</span> <span class="o">*=</span> <span class="mi">5</span><span class="p">;</span>
<span class="n">assert</span><span class="p">(</span><span class="n">a</span> <span class="o">==</span> <span class="mi">60</span><span class="p">);</span>
<span class="n">b</span> <span class="o">/=</span> <span class="mi">2</span><span class="p">;</span>
<span class="n">assert</span><span class="p">(</span><span class="n">b</span> <span class="o">==</span> <span class="mi">1</span><span class="p">);</span>
<span class="n">a</span> <span class="o">-=</span> <span class="mi">15</span><span class="p">;</span>
<span class="n">assert</span><span class="p">(</span><span class="n">a</span> <span class="o">==</span> <span class="mi">45</span><span class="p">);</span>
<span class="n">b</span> <span class="o">+=</span> <span class="mi">8</span><span class="p">;</span>
<span class="n">assert</span><span class="p">(</span><span class="n">b</span> <span class="o">==</span> <span class="mi">9</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">a</span> <span class="o">!=</span> <span class="mi">10</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">a</span> <span class="o">></span> <span class="mi">3</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">b</span> <span class="o"><</span> <span class="mi">10</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">a</span> <span class="o"><=</span> <span class="mi">45</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">b</span> <span class="o">>=</span> <span class="mi">8</span><span class="p">);</span>
<span class="n">assert</span><span class="p">(</span><span class="n">c</span> <span class="o">+</span> <span class="n">string</span><span class="p">(</span><span class="s">" Rogers"</span><span class="p">)</span> <span class="o">==</span> <span class="n">Name</span><span class="p">(</span><span class="s">"Steve Rogers"</span><span class="p">));</span>
<span class="c1">// As a quirk of the implementation, you get an empty wrapper whenever
</span> <span class="c1">// an invalid operation is performed
</span> <span class="n">assert</span><span class="p">(</span><span class="n">c</span> <span class="o">-</span> <span class="n">Name</span><span class="p">(</span><span class="s">"Uh-oh"</span><span class="p">)</span> <span class="o">==</span> <span class="n">Name</span><span class="p">());</span>
<span class="k">return</span> <span class="mi">0</span><span class="p">;</span>
<span class="p">}</span>
</code></pre></div></div>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>As much as I like this name, I have put some thought into what I could rename this blog to in order to reflect its focus on mathematics. I think I like the sound of “In a Neighborhood of the Truth.” <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>Incidentally, this (apparently) marks the 2-year anniversary of my first CS post. <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>I should go ahead and warn you now, this post will (probably) have a lot of (C++) code (snippets). <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>If you believe that you will eventually overcome the technical challenges involved in letting it chase after far-off objects <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
<li id="fn:5">
<p>If I remember correctly, NASA once lost a Mars probe (the Mars Climate Orbiter) because of a unit issue: one piece of software produced values in imperial units (pound-seconds) when the receiving software expected metric units (newton-seconds). <a href="#fnref:5" class="reversefootnote">↩</a></p>
</li>
<li id="fn:12">
<p>By the time I started writing this section, it was no longer the anniversary of my first CS post <a href="#fnref:12" class="reversefootnote">↩</a></p>
</li>
<li id="fn:7">
<p>I’m using the word “arbitrary” in a weak and ill-defined sense here. For my purposes, I really only need to be able to do this with integral types (and really only with uint64_t). However, having something that works for a larger class of types doesn’t hurt. <a href="#fnref:7" class="reversefootnote">↩</a></p>
</li>
<li id="fn:9">
<p>If you’re anything like me, your first thought is something like, “Rust has Traits and those would make this easy. Maybe I should write this project in Rust instead.” Unfortunately, if you are like me, it’s too late for you to write this project in Rust instead. <a href="#fnref:9" class="reversefootnote">↩</a></p>
</li>
<li id="fn:6">
<p>And certainly no mypy expert <a href="#fnref:6" class="reversefootnote">↩</a></p>
</li>
<li id="fn:8">
<p>Because of the specifics of how we’ll do this, we actually “support” a superset of the operations of the inner type. <a href="#fnref:8" class="reversefootnote">↩</a></p>
</li>
<li id="fn:10">
<p>I haven’t actually confirmed that this compiles, so let me know if it doesn’t <a href="#fnref:10" class="reversefootnote">↩</a></p>
</li>
<li id="fn:11">
<p>Don’t worry if you don’t know what glm::vec2 is. All that’s important is that it doesn’t support printing via ostream’s <code class="highlighter-rouge"><<</code> <a href="#fnref:11" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>The Things That Keep Me Up At Night2018-08-25T00:00:00+00:002018-08-25T00:00:00+00:00https://nivent.github.io/blog/h1ra-is-0<p>Every now and then, I will find some theorem or exercise that is interesting and “obviously” doable, but despite my best efforts, I just can’t seem to figure out what is going on. Sometimes when this happens, I find myself unable to let go of this problem, and it enters my mathematical backlog, where it will continue to torment me until the day I die<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>. When I have free headspace, I like to return to these problems in hopes of finally having the insight I need; this post will be my attempt <sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup> to resolve one of the things that more recently kept me up at night. In particular, we will show that<sup id="fnref:4"><a href="#fn:4" class="footnote">3</a></sup></p>
<script type="math/tex; mode=display">\hom^1(\R,A_{\R})=0</script>
<p>where $A$ is an abelian group and $A_{\R}$ is its associated constant sheaf on $\R$. A lot <sup id="fnref:6"><a href="#fn:6" class="footnote">4</a></sup> of me figuring out the details of showing this will be done as I write this post, so what follows will likely be rough and ill-organized.</p>
<h1 id="what-could-have-been">What Could Have Been</h1>
<p>In the end, this post turned out very differently from what I originally imagined. The original game plan was, roughly, introduce the sheaf inverse image and show that given a (continuous) function $f:X\to Y$ and a sheaf $\ms F$ on $Y$, one gets maps on cohomology</p>
<script type="math/tex; mode=display">\hom^k(Y,\ms F)\to\hom^k(X,\inv f\ms F)</script>
<p>After this, remark that the inverse image of a constant sheaf is a constant sheaf (!), so sheaf cohomology with constant coefficients is a (contravariant) functor. Finally, I would show that any two homotopic maps $f\simeq g:X\to Y$ induce the same map on cohomology</p>
<script type="math/tex; mode=display">\ast f=\ast g:\hom^k(Y, A_Y)\to\hom^k(X, A_X)</script>
<p>where $A$ is an abelian group and $A_Y$ is its associated constant sheaf on $Y$. This would mean that homotopy equivalent spaces have the same (sheaf) cohomology (with constant coefficients). Since $\R$ is contractible and any sheaf on a one-point space is flasque, this would prove that $\hom^1(\R,A_{\R})=0$.</p>
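<p>In symbols (a sketch, assuming the homotopy invariance described above): contracting $\R$ to a point $\ast$ would give, for all $k\ge1$,</p>
<script type="math/tex; mode=display">\hom^k(\R,A_{\R})\cong\hom^k(\ast,A_{\ast})=0</script>
<p>where the last equality holds because every sheaf on a one-point space is flasque, and flasque sheaves are acyclic.</p>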
<p>However, while trying to figure out how to actually do all of that, I stumbled across this <a href="https://math.stackexchange.com/questions/2801221/why-does-the-sheaf-cohomology-of-the-constant-sheaf-on-mathbbr-vanish">stackexchange question</a>, which presents a much better <sup id="fnref:3"><a href="#fn:3" class="footnote">5</a></sup> proof of this fact. Hence, I will instead follow the lead of that question. <sup id="fnref:5"><a href="#fn:5" class="footnote">6</a></sup></p>
<p>I never did finish working out the details of my original plan, so it’s possible I return to it in the future. While I do not think it is the best method to achieve the aim of this post, I do think it would be nice to write up the details of why cohomology with constant coefficients is invariant under homotopy.</p>
<h1 id="the-proof">The Proof</h1>
<p>One of the big things about cohomology is that short exact sequences (of sheaves) give rise to long exact sequences in cohomology. Because of that, the claim that $\hom^1(\R, A_{\R})=0$ is equivalent to the following.</p>
<div class="theorem">
Let $0\to A_{\R}\xrightarrow f\ms B\xrightarrow g\ms C\to0$ be a short exact sequence of sheaves (of abelian groups) on $\R$. Then, $0\to A\to\ms B(\R)\to\ms C(\R)\to0$ is exact. In particular, $\ms B(\R)\to\ms C(\R)$ is surjective.
</div>
<div class="proof4">
Fix any $\gamma\in\ms C(\R)$, and let
$$\mc S=\brackets{\left.(U,\beta)\right|\beta\in\ms B(U),\ g_U(\beta)=\gamma\mid_U,\text{ and }U\text{ is an open interval}}\text{ where }(U,\beta)\le(U',\beta')\iff U\subseteq U'\text{ and }\beta'\mid_U=\beta$$
Now, consider some chain $(U_1,\beta_1)\le(U_2,\beta_2)\le\dots$ in $\mc S$. Let $U=\bigcup_nU_n$ and let $\beta="\lim\beta_n"$. By $\lim\beta_n$ what I really mean is that $\{U_n\}$ gives an open cover of $U$ and because $\ms B$ is a sheaf, the various $\beta_n\in\ms B(U_n)$ patch together to form a unique element $\beta="\lim\beta_n"$ of $\ms B(U)$. Note that $U$ is an open interval and that $g_U(\beta)=\gamma\mid_U$ essentially because $\ms C$ is a sheaf. Thus, every chain in $\mc S$ has an upper bound, so Zorn's lemma implies that $\mc S$ has a maximal element $(V,\delta)$. We claim that $V=\R$, so $\delta\in\ms B(\R)$ is a preimage of $\gamma$. Suppose otherwise. By exactness at $\ms C$, there exists an open cover $\{U_n\}$ of $\R$ with elements $\beta_n\in\ms B(U_n)$ s.t. $g_{U_n}(\beta_n)=\gamma\mid_{U_n}$ for all $n$. By decomposing each $U_n$ into its connected components, and replacing the corresponding $\beta_n$ with its restrictions onto those components, we may assume that each $U_n$ is an open interval. Now, since $V\neq\R$, there must exist some $U_m$ s.t. $U_m\not\subset V$ and $U_m\cap V\neq\emptyset$. Let $V_m=U_m\cap V$, and note that
$$\delta\mid_{V_m}-\beta_m\mid_{V_m}\in\ker g=\im f$$
so there's some $\alpha_m\in A_{\R}(V_m)$ s.t. $f_{V_m}(\alpha_m)=\delta\mid_{V_m}-\beta_m\mid_{V_m}$. Note that $V_m$ is an open interval, so $A_{\R}(V_m)=A=A_{\R}(\R)$ and the restriction map $A_{\R}(\R)\to A_{\R}(V_m)$ is the identity. Hence, $\alpha_m$ extends to a global section $\alpha\in A_{\R}(\R)$. Finally, $f_{U_m}(\alpha\mid_{U_m})+\beta_m\in\ms B(U_m)$ and $\delta\in\ms B(V)$ agree on $V_m$, so they patch together to form a (unique) $\beta\in\ms B(U_m\cup V)$. However,
$$g_{U_m\cup V}(\beta)\mid_{U_m}=g_{U_m}(f_{U_m}(\alpha\mid_{U_m})+\beta_m)=g_{U_m}(\beta_m)=\gamma\mid_{U_m}$$
and
$$g_{U_m\cup V}(\beta)\mid_V=g_V(\delta)=\gamma\mid_V$$
so $g_{U_m\cup V}(\beta)=\gamma\mid_{U_m\cup V}$. However, because $U_m\cup V\supsetneq V$, this contradicts $(V,\delta)$ being a maximal element. Thus, we must have had $V=\R$ all along, proving the theorem.
</div>
<div class="cor">
$\hom^1(\R,A_{\R})=0$
</div>
<div class="proof4">
Just embed $A_{\R}$ in any acyclic sheaf. i.e. consider any short exact sequence
$$0\to A_{\R}\to\ms B\to\ms C\to0$$
where $\ms B$ is acyclic. Then, looking at the long exact sequence on cohomology, we see
$$0\to\hom^0(\R, A_{\R})\to\hom^0(\R, \ms B)\to\hom^0(\R, \ms C)\to\hom^1(\R, A_{\R})\to0$$
but $\im(\hom^0(\R,\ms B)\to\hom^0(\R, \ms C))=\hom^0(\R, \ms C)$ by the above theorem, so by exactness the connecting map $\hom^0(\R, \ms C)\to\hom^1(\R, A_{\R})$ is zero. On the other hand, this connecting map is surjective since its image is $\ker(\hom^1(\R, A_{\R})\to0)=\hom^1(\R, A_{\R})$, so $\hom^1(\R, A_{\R})=0$.
</div>
<h1 id="a-second-question">A Second Question</h1>
<p>In the previous section we showed that all constant sheaves on $\R$ are acyclic <sup id="fnref:10"><a href="#fn:10" class="footnote">7</a></sup>. One can similarly show that, for any fixed $k\ge0$, the sheaf $C^k_{\R,\R}$ of $k$-differentiable functions (where 0-differentiable=continuous) into $\R$ is acyclic. In fact, when you think about it, sheaf cohomology is vaguely about measuring the failure of certain local-to-global problems on your space, but $\R$ looks very similar locally and globally. Indeed, given any point in $\R$, you can find arbitrarily small neighborhoods of that point that are homeomorphic to the whole space <sup id="fnref:7"><a href="#fn:7" class="footnote">8</a></sup>. Thinking about this made me wonder the following:</p>
<div class="question">
Are there any non-acyclic (cyclic?) sheaves (of abelian groups) on $\R$?
</div>
<p>I brought this question up to one of my friends, and after talking about it for a while, we found an example <sup id="fnref:8"><a href="#fn:8" class="footnote">9</a></sup> which shows the answer is yes. Quick note: for arbitrary $S\subset X$ and a sheaf $\ms F$ on $X$, we set</p>
<script type="math/tex; mode=display">\ms F(S)=\dirlim_{U\supseteq S}\ms F(U)</script>
<p>where the directed limit is taken over open sets $U\supseteq S$. Hence, elements of $\ms F(S)$ are “germs over $S$.”</p>
<div class="example">
<p>Fix any two distinct points $a,b\in\R$, and let $U=\R\sm\brackets{a,b}$. Let $\Z_U$ be the sheaf on $\R$ given by the sheafification of<sup id="fnref:9"><a href="#fn:9" class="footnote">10</a></sup></p>
<script type="math/tex; mode=display">\ms F(V)=\twocases{\Z_U(U\cap V)}{V\cap U\neq\emptyset}0</script>
<p>where the $\Z_U$ in the definition above is the constant sheaf on $U$ and not the sheaf on $\R$ we are constructing. Let $\Z_{\{a,b\}}$ be a similarly defined sheaf on $\R$ with $\{a,b\}$ in place of $U$. Then, we get a short exact sequence</p>
<script type="math/tex; mode=display">0\to\Z_U\to\Z_{\R}\to\Z_{\{a,b\}}\to0</script>
<p>where exactness can easily be checked on stalks: for stalks over $x\not\in\{a,b\}$ the sequence becomes $0\to\Z\to\Z\to0\to0$ which is exact, and for stalks over $x\in\{a,b\}$ it becomes $0\to0\to\Z\to\Z\to0$ which is also exact. Looking at cohomology, we get that the following is exact</p>
<script type="math/tex; mode=display">\hom^0(\R,\Z_{\R})\to\hom^0(\R,\Z_{\{a,b\}})\to\hom^1(\R,\Z_U)\text{ which is }\Z\to\Z\oplus\Z\to\hom^1(\R,\Z_U)</script>
<p>This means that $\hom^1(\R,\Z_U)\neq0$ as there’s no surjection $\Z\to\Z\oplus\Z$. Thus, $\Z_U$ is a non-acyclic sheaf on $\R$.</p>
</div>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>or I figure it out; whichever happens first. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>if you are reading this, then I probably succeeded. <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>More so than usual, this post is primarily meant for me, so many things won’t be defined as I assume I will still know them whenever I look back on this post in the future. <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
<li id="fn:6">
<p>Most (I’ve really only thought about big picture stuff) <a href="#fnref:6" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>better in terms of simplicity. As a consequence though, it involves less general concepts/theorems. <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:5">
<p>I’ll basically just fill in the details of the first bullet point of the answer given in the link <a href="#fnref:5" class="reversefootnote">↩</a></p>
</li>
<li id="fn:10">
<p>This isn’t quite true, but it is not hard to show this after what was done in the previous section. <a href="#fnref:10" class="reversefootnote">↩</a></p>
</li>
<li id="fn:7">
<p>This is kind of the idea behind interval-flasque sheaves on R <a href="#fnref:7" class="reversefootnote">↩</a></p>
</li>
<li id="fn:8">
<p>in Hartshorne <a href="#fnref:8" class="reversefootnote">↩</a></p>
</li>
<li id="fn:9">
<p>I must admit that it is possible I am defining things incorrectly here. If you notice a mistake, call me out on it. <a href="#fnref:9" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>UFDs and Localization2018-08-16T14:37:00+00:002018-08-16T14:37:00+00:00https://nivent.github.io/blog/ufd-localization<p>This post will be a little random. I plan on talking about a couple sufficient criteria for a ring $R$ to be a UFD, and then use (one of) them to show that the group ring $R[\Z^n]$ is a UFD when $R$ is <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>. These criteria will involve the concept of localizing a ring, which is something I have wanted to talk about on this blog for a while now, so let’s start with that.</p>
<h1 id="localization">Localization</h1>
<p>Localization <sup id="fnref:3"><a href="#fn:3" class="footnote">2</a></sup> is just about the nicest algebraic operation <sup id="fnref:2"><a href="#fn:2" class="footnote">3</a></sup> one can apply, although this is not apparent from its definition. In essence, localization gives us a way to invert a chosen subset of elements of a ring $R$. In this way, it generalizes the construction of the field of fractions of a domain. Before constructing the localization of a ring, we need to know which subsets of elements we can invert. Throughout this post, all rings are commutative with unity.</p>
<div class="definition">
Let $R$ be a ring. A subset $S\subset R$ is called <b>multiplicative</b> if $1\in S$ and $a,b\in S\implies ab\in S$.
</div>
<div class="definition">
Given a multiplicative set $S\subset R$, we can form the <b>localization</b> $\sinv R$ which is the ring
$$\sinv R=\brackets{\left.\frac ab\right| a\in R,b\in S}$$
where addition and multiplication are defined as expected, and we say
$$\frac ab=\frac cd\iff\exists u\in S:0=u(ad-bc)$$
</div>
<div class="exercise">
Prove that addition and multiplication on $\sinv R$ are well-defined and actually do give it a ring structure.
</div>
<p>Note that when $R$ is a domain (and $0\not\in S$), the condition on equality of fractions becomes the usual $a/b=c/d\iff ad=bc$. Note furthermore that $\sinv R\subset\Frac(R)$ and $\Frac(\sinv R)=\Frac(R)$. Finally, $\sinv R=0\iff0\in S$. In order to prevent me from saying something technically false, for the rest of the post, assume that $0\not\in S$ for any multiplicative set we consider.</p>
<div class="remark">
There is a natural map $R\to\sinv R$ given by $r\mapsto r/1$, and this map is injective when $S$ contains no zero divisors (e.g. if $R$ is a domain) as $r/1=0/1\iff\exists u\in S:0=ur$.
</div>
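<p>To make the equality condition and the kernel of $R\to\sinv R$ concrete, here is a minimal Python sketch (the choice of $R=\Z/6\Z$, the multiplicative set, and the function name are mine) showing why the witness $u\in S$ in the definition matters when $R$ has zero divisors:</p>

```python
def loc_eq(a, b, c, d, S, n):
    """Decide a/b == c/d in S^{-1}(Z/nZ): exists u in S with u*(a*d - b*c) == 0 mod n."""
    return any(u * (a * d - b * c) % n == 0 for u in S)

# Localize Z/6Z at the multiplicative set S = {1, 3} (the powers of 3 mod 6).
S, n = {1, 3}, 6

# 2/1 = 0/1 in S^{-1}(Z/6Z): the witness u = 3 kills 2,
# even though naive cross-multiplication gives 2*1 - 1*0 = 2 != 0 mod 6.
print(loc_eq(2, 1, 0, 1, S, n))        # True
print((2 * 1 - 1 * 0) % n == 0)        # False
```

<p>So $2$ lies in the kernel of $\Z/6\Z\to\sinv{(\Z/6\Z)}$ exactly because $S$ contains the zero divisor $3$.</p>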
<div class="example">
There are two standard classes of multiplicative sets. These are
<ul>
<li> $S=\{1,x,x^2,\dots\}$ for some $x\in R$. In this case, we write $\sinv R=R\sqbracks{\frac1x}\simeq R[y]/(1-xy)$</li>
<li> $S=R\sm\mfp$ where $\mfp\subset R$ is a prime ideal. In this case we write $\sinv R=R_\mfp$</li>
</ul>
</div>
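<p>As a sanity check on the second class, membership in $\Z_{(p)}\subset\Q$ is easy to test by machine; this sketch (function name mine) uses the fact that a rational lies in $\Z_{(p)}$ iff $p$ does not divide the denominator of its reduced form:</p>

```python
from fractions import Fraction

def in_localization_at(q, p):
    """Is the rational q in Z_(p), i.e. writable with denominator prime to p?"""
    return Fraction(q).denominator % p != 0  # Fraction auto-reduces to lowest terms

print(in_localization_at(Fraction(3, 4), 5))    # True: 4 is a unit in Z_(5)
print(in_localization_at(Fraction(1, 5), 5))    # False: 5 is not invertible in Z_(5)
print(in_localization_at(Fraction(5, 10), 5))   # True: 5/10 = 1/2
```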
<p>Note that when $R$ is a domain and $\mfp=(0)$, we get $R_{(0)}=\Frac(R)$. In general, the second example above is particularly prevalent because it lets you study rings “one prime at a time”. In particular,…</p>
<div class="lemma">
Given an ideal $I\subset R$, the set $\sinv I=I\cdot\sinv R=\brackets{\left.\frac as\right|a\in I,s\in S}$ is an ideal of $\sinv R$.
</div>
<div class="proof4">
Exercise.
</div>
<div class="theorem">
$R_\mfp$ has exactly 1 maximal ideal.
</div>
<div class="proof4">
We claim that $\mfp R_\mfp$ is the only maximal ideal of $R_\mfp$. This will follow from showing that $\mfp R_\mfp$ is precisely the set of all non-units in $R_\mfp$. Pick some non-unit $a/s\in R_\mfp$. Then, $a\not\in S$ since otherwise $s/a\in R_\mfp$ would be an inverse. Hence, $a\in R\sm S=\mfp$ so $a/s\in\mfp R_\mfp$. Conversely, if $a/s\in\mfp R_\mfp$, then we can assume that $a\in\mfp$. Suppose that $b/t$ were an inverse to $a/s$; then,
$$\frac{ab}{st}=\frac11\iff\exists u\not\in\mfp:0=u(ab-st)\implies ab-st\in\mfp\implies st\in\mfp$$
which is impossible since $st\in S$, so $a/s$ must not be a unit. Thus, $\mfp R_\mfp$ is the set of all non-units, and hence the unique maximal ideal.
</div>
<div class="definition">
A ring $R$ is called <b>local</b> if it only has 1 maximal ideal.
</div>
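<p>For the concrete local ring $\Z_{(5)}$, the theorem says the non-units are exactly $5\Z_{(5)}$; a quick sketch (function name mine) checks unit-ness by looking at the reduced numerator:</p>

```python
from fractions import Fraction

def is_unit_in_Zp(q, p):
    """q in Z_(p) is a unit iff its (reduced) numerator is prime to p."""
    q = Fraction(q)
    assert q.denominator % p != 0, "not an element of Z_(p)"
    return q.numerator % p != 0

# Non-units of Z_(5) are exactly 5 * Z_(5), the unique maximal ideal:
print(is_unit_in_Zp(Fraction(7, 3), 5))    # True: 3/7 is an inverse in Z_(5)
print(is_unit_in_Zp(Fraction(10, 3), 5))   # False: 10/3 = 5 * (2/3)
```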
<p>In addition to this, localization also respects many properties of the original ring. For example, we have the following theorems.</p>
<div class="theorem">
If $R$ is a domain, then so is $\sinv R$.
</div>
<div class="proof4">
Exercise.
</div>
<div class="theorem">
Let $\mfp\subset R$ be a prime ideal in a domain $R$ with $\mfp\cap S=\emptyset$. Then $\sinv\mfp$ is prime as well.
</div>
<div class="proof4">
If $\mfp\cap S\neq\emptyset$, then $\sinv\mfp=\sinv R$ since it contains a unit, so assume instead that $\mfp\cap S=\emptyset$. Let $x=a/s,y=b/t\in\sinv R$ ($s,t\in S$ and $a,b\in R$) with $xy\in\sinv\mfp$. Then, $\frac{ab}{st}=\frac c{s'}$ for some $c\in\mfp$ and $s'\in S$, which means that $abs'=cst\in\mfp$. Since $s'\not\in\mfp$, either $a\in\mfp$ or $b\in\mfp$, so $x$ or $y$ lies in $\sinv\mfp$, making $\sinv\mfp$ prime.
</div>
<div class="theorem">
Let $\mfq\subset\sinv R$ be a prime ideal and let $f:R\to\sinv R$ denote the map $f(r)=r/1$. Then, $\mfp:=\inv f(\mfq)$ is a prime ideal of $R$. When $R$ is a domain, we have $\mfp=\mfq\cap R$.
</div>
<div class="proof4">
In general, given two rings $R,R'$ with a map $g:R\to R'$ and a prime ideal $\mfq\subset R'$, $\inv g(\mfq)$ is a prime ideal of $R$ (it is proper since $g(1)=1\not\in\mfq$) because
$$ab\in\inv g(\mfq)\iff g(a)g(b)\in\mfq\iff g(a)\in\mfq\text{ or }g(b)\in\mfq\iff a\in\inv g(\mfq)\text{ or }b\in\inv g(\mfq)$$
</div>
<div class="theorem">
$\sinv R$ is a PID when $R$ is a PID.
</div>
<div class="proof4">
Exercise.
</div>
<p>I think this should be everything we need for this post. With that done, when is a ring a UFD?</p>
<h1 id="ufd-criteria">UFD Criteria</h1>
<p>I always hate trying to prove rings are UFDs because my go-to tactics are to prove something stronger (e.g. it's a PID), but this clearly isn’t always possible. The definition of a UFD is kind of messy, so it’s nice to have alternative, sufficient conditions for the property. One<sup id="fnref:4"><a href="#fn:4" class="footnote">4</a></sup> such condition is</p>
<div class="lemma">
Let $R$ be an integral domain. If every element of $R$ is a product of primes, then $R$ is a UFD.
</div>
<div class="proof4">
First, let $\pi$ be an irreducible element, and write $\pi=\tau_1\dots\tau_n$ as a product of primes. Then, since $\pi$ is irreducible, we really have (possibly after rearranging the $\tau_i$) that $\pi=u\tau_1$ for some unit $u$, so $\pi$ is a prime. Thus, every element of $R$ factors into a product of irreducibles. Suppose that
$$u\pi_1\dots\pi_n=U\Pi_1\dots\Pi_N$$
where $u,U$ are units and $\pi_i,\Pi_i$ are irreducibles ($\iff$ primes). We need to show that (possibly after rearrangement) $\pi_i,\Pi_i$ are associates and $n=N$. Well, $\pi_1$ is prime so $\pi_1\mid\Pi_j$ for some $j$. Rearrange to assume that $j=1$, and divide by $\pi_1$ to get that $u\pi_2\dots\pi_n=U'\Pi_2\dots\Pi_N$ for some unit $U'$. The claim then follows by induction, so $R$ is a UFD.
</div>
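<p>In $\Z$, the hypothesis of the lemma is just the familiar fact that every integer factors into primes, which trial division makes effective (a sketch; the helper name is mine):</p>

```python
def prime_factors(n):
    """Factor a positive integer into primes by trial division."""
    out, d = [], 2
    while d * d <= n:
        while n % d == 0:
            out.append(d)
            n //= d
        d += 1
    if n > 1:            # whatever survives is itself prime
        out.append(n)
    return out

print(prime_factors(360))   # [2, 2, 2, 3, 3, 5]
```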
<div class="theorem" name="Kaplansky's Criterion">
Let $R$ be an integral domain. Then, $R$ is a UFD iff every nonzero prime ideal of $R$ contains a nonzero principal prime ideal (i.e. contains a prime element).
</div>
<div class="proof4">
$(\implies)$ This direction is easy. If $R$ is a UFD and $\mfp$ is a prime ideal, then consider any $x\in\mfp$. We can write $x=\pi_1\dots\pi_n$ as a product of prime elements, and primality of $\mfp$ implies that $(\pi_i)\subset\mfp$ for some $i$, giving us the result.
<br />
$(\impliedby)$ For this direction, let $S$ be the set of all (finite) products of prime elements of $R$ (including the empty product, so $1\in S$). Then, $0\not\in S$ and $S$ is clearly multiplicative, so we can form the (nonzero) ring $A=\sinv R$. We claim that $A$ is a field. Pick any nonzero nonunit $r\in R$. Suppose that $1/r\not\in A$, so $(r)\subsetneq A$, meaning that it is contained in some maximal ideal $\mfm\subset A$. Then, $\mfp=\mfm\cap R$ is prime and nonzero (it contains $r$), and hence contains a principal prime ideal $(\pi)$. But this means that $\sinv(\pi)\subset\sinv\mfm\subsetneq A$, which is impossible as $\pi\in S$, so it's definitely a unit in $A$. Thus, $1/r\in A$, so we can write
$$\frac1r=\frac b{\pi_1\dots\pi_n}\in A$$
where $b\in R$ and the $\pi_i\in R$ are all prime. We now prove by induction on $n$ that $r$ is the product of prime elements. If $n=1$, then $rb=\pi_1$ so since primes are irreducible and $r$ is not a unit, $r$ must itself be prime. For $n>1$, we have $rb=\pi_1\dots\pi_n$ so each $\pi_i$ divides $r$ or $b$. If some $\pi_j$ divides $b$, then $b=c\pi_j$, so $rc$ is a product of $n-1$ primes, meaning $r$ is a product of primes by induction. Otherwise, each $\pi_i$ divides $r$ so $\parens{\frac r{\pi_1\dots\pi_n}}b=1$, making $b$ a unit and $r$ a product of primes. Hence, we have that every element of $R$ is a product of primes, and so $R$ is a UFD.
</div>
<div class="cor">
Every PID is a UFD.
</div>
<div class="cor">
Let $R$ be a Dedekind domain. Then, $R$ is a UFD iff $R$ is a PID.
</div>
<p>It is maybe worth mentioning that the above <sup id="fnref:8"><a href="#fn:8" class="footnote">5</a></sup> can be proven without localizing. You replace the supposition that $(r)\subsetneq\sinv R$ with the supposition that $(r)\cap S=\emptyset$ to eventually show that $rb=\pi_1\dots\pi_n$ for some $b$. When doing this, you have to use Zorn’s lemma to show that there’s some prime ideal of $R$ containing $(r)$ that is disjoint from $S$, whereas we get the analogous fact more easily by considering an appropriate maximal ideal of $\sinv R$<sup id="fnref:7"><a href="#fn:7" class="footnote">6</a></sup>.</p>
<p>So UFDs are integral domains where every element is a product of primes, or integral domains where each nonzero prime ideal contains a principal prime ideal. To me, these conditions <sup id="fnref:5"><a href="#fn:5" class="footnote">7</a></sup> sound more reasonable to prove than the original unique-factorization-into-irreducibles formulation. As an application of the second of them, we also prove the following.</p>
<div class="theorem">
Let $R$ be a UFD. Then, $\sinv R$ is a UFD.
</div>
<div class="proof4">
Let $\mfq\subset\sinv R$ be a nonzero prime ideal, so $\mfp=\mfq\cap R$ is a nonzero prime ideal in $R$ (if $a/s\in\mfq$ is nonzero, then $a=s\cdot(a/s)\in\mfq\cap R$ is nonzero). This means that it contains a principal prime $(\pi)\subset\mfp$, so $\pi\in\mfq$. Thus, it suffices to show that $\pi$ is a prime of $\sinv R$. Pick $a/s,b/t\in\sinv R$ and suppose that $\pi\mid\frac{ab}{st}$. Then, $\pi\mid ab$ (multiply by $st$) so $\pi\mid a$ or $\pi\mid b$ ($\pi$ is prime in $R$), but this means that $\pi\mid a/s$ or $\pi\mid b/t$ (since $s,t$ are units). Thus, $\pi$ is prime in $\sinv R$ and $\sinv R$ is a UFD.
</div>
<p>If that’s not a clean proof, then I don’t know what is. If you think about it, the above really proves the following more general theorem. <sup id="fnref:9"><a href="#fn:9" class="footnote">8</a></sup></p>
<div class="theorem">
Let $\phi:A\to R$ be a ring map with $A$ a UFD. Then if
<ul>
<li> $\phi(a)$ is irreducible whenever $a$ is irreducible; and</li>
<li> $\inv\phi(\mfp)\neq(0)$ for all nonzero primes $\mfp\subset R$</li>
</ul>
we must have that $R$ is a UFD.
</div>
<p>If you read the first footnote, then you may be wondering if I’ll prove that $R$ UFD $\implies R[x]$ UFD. I won’t <sup id="fnref:6"><a href="#fn:6" class="footnote">9</a></sup>.</p>
<h1 id="an-interesting-conjecture">An Interesting Conjecture</h1>
<p>First, a crash course on group rings.</p>
<div class="definition">
Let $R$ be a ring and $G$ be a group. The <b>group ring</b> $R[G]$ is the ring of formal sums
$$R[G]=\brackets{\left.\sum_{g\in G}r_g\cdot e_g\right|r_g\in R}$$
where $r_g=0$ for all but finitely many $g\in G$. Multiplication is given by $e_g\cdot e_h=e_{gh}$ for $g,h\in G$, and more generally:
$$\parens{\sum_{g\in G}r_g\cdot e_g}\parens{\sum_{h\in G}s_h\cdot e_h}=\sum_{g\in G}\sum_{h\in G}r_gs_h\cdot e_{gh}$$
The additive identity is the sum $0$, and the multiplicative identity is $e_1$, where $1\in G$ is the identity element of the group.
</div>
<div class="remark">
Group rings satisfy the following:
<ul>
<li> $R[G]$ is commutative iff $G$ is abelian. </li>
<li> $R[G]$ has zero divisors if $G$ has a nontrivial torsion element. Indeed, if $x^n=1$ in $G$ with $x\neq1$, then $(1-e_x)(1+e_x+\dots+e_{x^{n-1}})=1-e_{x^n}=0$, so $(1-e_x)\in R[G]$ is a zero divisor (where by $1$ we technically mean $1e_1$).</li>
<li> For $G=\Z$, we have $R[G]\simeq R[t,\inv t]\simeq R[t,s]/(1-st)$ </li>
</ul>
</div>
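<p>The zero-divisor computation in the second bullet is easy to verify by machine. Below is a toy implementation (the dict encoding and names are my own) of multiplication in $\Z[\Z/n\Z]$, with an element stored as a map from group elements to coefficients:</p>

```python
from collections import defaultdict

def gr_mul(f, g, n):
    """Multiply two elements of Z[Z/nZ], stored as dicts {group element: coefficient}."""
    out = defaultdict(int)
    for a, r in f.items():
        for b, s in g.items():
            out[(a + b) % n] += r * s          # e_a * e_b = e_{a+b}
    return {k: v for k, v in out.items() if v != 0}

n = 4
one_minus_x = {0: 1, 1: -1}                    # 1 - e_x
norm = {k: 1 for k in range(n)}                # 1 + e_x + ... + e_{x^{n-1}}
print(gr_mul(one_minus_x, norm, n))            # {}: the product of two nonzero elements is zero
```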
<p>Now, I came across Kaplansky’s Criterion while a friend and I were trying to resolve the following:</p>
<div class="question">
Let $k$ be a field. Is the group ring $k[\Z^n]$ a UFD (or even a domain)?
</div>
<p>This turns out to be true with a simple proof once you know that localization preserves UFD-ness.</p>
<div class="theorem">
Let $R$ be a UFD. Then, $R[\Z^n]$ is a UFD.
</div>
<div class="proof4">
It is not hard to see that
$$R[\Z^n]\simeq R[x_1,\inv x_1,\dots,x_n,\inv x_n]$$
We now proceed by induction. The claim is true when $n=0$ since $R[\Z^0]\simeq R$, so suppose that $n>0$ and that $A:=R[x_1,\inv x_1,\dots,x_{n-1},\inv x_{n-1}]$ is a UFD. Then, $A[x_n]$ is a UFD as is its localization $A[x_n]\sqbracks{\frac1{x_n}}$ but clearly, $R[\Z^n]\simeq A[x_n]\sqbracks{\frac1{x_n}}$ so $R[\Z^n]$ is a UFD as claimed.
</div>
<p>At this point, the only remaining question is why would I care about that. Well, one of my friends mentioned the following problem, and I was surprised that this is not already known to be true, so we set about convincing ourselves it held in the simplest possible case: finitely generated abelian groups.</p>
<div class="conj" name="Kaplansky's Conjecture">
Let $K$ be a field, and let $G$ be a torsion-free group. Then, the group ring $K[G]$ does not contain nontrivial zero divisors.
</div>
<p>I should maybe mention that this is only one of Kaplansky’s conjectures, so the name isn’t well-defined.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Depending on how I’m feeling when I get to that part, I might also say why I care about this fact and/or “plug a hole” in this blog by proving that R[x] is a UFD when R is a UFD since I didn’t <a href="../ring-intro">last time</a> I had a chance to. <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>I won’t get into why it’s called this, but it’s related to studying functions defined near a point of a space by looking at the functions that do not vanish at that point. <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>“functor” <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>Two? <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
<li id="fn:8">
<p>theorem, not lemma or corollary <a href="#fnref:8" class="reversefootnote">↩</a></p>
</li>
<li id="fn:7">
<p>The kicker is that proving that any ideal is contained in a maximal one involves using Zorn’s lemma, so we still appeal to Zorn’s lemma; we just do so more implicitly. <a href="#fnref:7" class="reversefootnote">↩</a></p>
</li>
<li id="fn:5">
<p>Or at least the second one <a href="#fnref:5" class="reversefootnote">↩</a></p>
</li>
<li id="fn:9">
<p>I’m not gonna lie: Kaplansky’s Criterion has way more applications than I realized when I started writing this post. Why isn’t it more popular? <a href="#fnref:9" class="reversefootnote">↩</a></p>
</li>
<li id="fn:6">
<p>I mistakenly thought that Kaplansky’s Criterion gave a simple proof of this along the lines of the localization proof: “Pick a prime ideal I of R[x] and intersect it with R to get a prime ideal J of R. Then J contains a prime element p which remains prime in R[x] so I contains the prime element p. Apply Kaplansky and you win.” The issue with this is that J may be the zero ideal of R, in which case you have to go through all the annoyingness of the standard proof that I’d rather avoid writing up. <a href="#fnref:6" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Orbit-Stabilizer for Finite Group Representations2018-06-06T00:19:00+00:002018-06-06T00:19:00+00:00https://nivent.github.io/blog/orbit-stab-rep<p>One of my professors covered the main result of this post during a class that I missed a while ago. Using some notes from a friend who attended that class, I want to try to reconstruct the theorem <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>. Experience with representation theory will be useful for this post, but I’ll try to cover enough of the basics so that previous exposure isn’t strictly required.</p>
<h1 id="orbit-stabalizer">Orbit-Stabilizer</h1>
<p>We will begin by continuing our discussion of group actions from <a href="../geo-group">last post</a>. Recall the definition</p>
<div class="definition">
Let $G$ be a group and let $X$ be a set. A <b>(left) group action</b> of $G$ on $X$ is a map $\phi:G\times X\rightarrow X$ satisfying
<ul>
<li> $1\cdot x=x$ for all $x\in X$ where $1\in G$ is the identity</li>
<li> $g\cdot(h\cdot x)=(gh)\cdot x$ for all $x\in X$ and $g,h\in G$</li>
</ul>
where $g\cdot x$ denotes $\phi(g,x)$.
</div>
<p>We sometimes write $G\curvearrowright X$ to denote that $G$ acts on $X$. If $X$ has additional structure (e.g. if $X$ is a vector space), then we require our group action to respect $X$’s structure. In general, a group action $G\curvearrowright X$ is a map $G\rightarrow\Aut(X)$ where the automorphisms of $X$ depend on the context <sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>.</p>
<p>Now, in order to (state and) prove Orbit-Stabilizer, we’ll need to know what those words mean.</p>
<div class="definition">
Let $G$ be a group acting on a set $X$. Given some $x\in X$, its <b>orbit</b> is
$$G\cdot x=\{g\cdot x\mid g\in G\}$$
Furthermore, its <b>stabilizer</b> is
$$\Stab(x)=G_x=\{g\in G\mid g\cdot x=x\}$$
</div>
<p>Note that the stabilizer of $x\in X$ is a subgroup of $G$ since $g,h\in G_x\implies(g\inv h)\cdot x=g\cdot x=x$. Furthermore, if $G\cdot x=X$ for some $x\in X$, then we say $G$ acts <b>transitively</b> on $X$.</p>
<p>Finally, if $G\curvearrowright X$, then we call $X$ a <b>$G$-set</b>. Naturally, these spaces have their own notion of homomorphisms.</p>
<div class="definition">
Let $X,Y$ be two $G$-sets. A <b>$G$-map</b> (or <b>$G$-equivariant map</b> or <b>$G$-morphisms</b>) is a map $f:X\rightarrow Y$ s.t. $f(g\cdot x)=g\cdot f(x)$ for all $g\in G$ and $x\in X$. We say $f$ is a <b>$G$-isomorphism</b> if it is bijective.
</div>
<div class="exercise">
Show that if $f$ is a $G$-isomorphism, then $\inv f$ is $G$-equivariant.
</div>
<p>With our definitions set up, we come to</p>
<div class="theorem" name="Orbit-Stabilizer">
Let $X$ be a $G$-set. Fix any $Y\subseteq X$ s.t. $g\cdot Y\cap Y\in\{Y,\emptyset\}$ for all $g\in G$ and $G\cdot Y=\{g\cdot y\mid g\in G,y\in Y\}=X$. Finally, let $H=\Stab(Y)=\{g\in G\mid\forall y\in Y:g\cdot y\in Y\}$. Then,
$$X\simeq\bigsqcup_{\sigma_i\in G/H}\sigma_iY$$
as $G$-sets where the union is taken over coset representatives of $G/H$ and $\sigma_iY=\{\sigma_i y\mid y\in Y\}$ (note: $\sigma_iy$ is just a formal symbol) and $G$ acts on it via $g\cdot(\sigma_iy)=\sigma_j(h\cdot y)$ for the unique $\sigma_j,h$ s.t. $g\sigma_i=\sigma_jh$.
</div>
<div class="proof4">
Let $f:\bigsqcup_{\sigma_i\in G/H}\sigma_iY\to X$ be the map $f(\sigma_iy)=\sigma_i\cdot y$. This map is $G$-equivariant since
$$f(g\cdot\sigma_iy)=f(\sigma_j(h\cdot y))=\sigma_j\cdot(h\cdot y)=(\sigma_jh)\cdot y=(g\sigma_i)\cdot y=g\cdot(\sigma_i\cdot y)=g\cdot f(\sigma_iy)$$
where $g\sigma_i=\sigma_jh$. For injectivity, if $\sigma_i\cdot y=\sigma_j\cdot y'$, then
$$(\inv\sigma_j\sigma_i)\cdot y=y'\implies\inv\sigma_j\sigma_i\in H\implies\sigma_iH=\sigma_jH\implies\sigma_i=\sigma_j\implies y=y'$$
where the second-to-last implication comes from the fact that we fixed our coset representatives ahead of time. Finally, for surjectivity, fix any $x\in X$. Since $G\cdot Y=X$, there exists $g\in G$ and $y\in Y$ s.t. $g\cdot y=x$. Thus, writing $g=\sigma_jh$, we have that $f(\sigma_j(h\cdot y))=x$.
</div>
<div class="cor">
Let $X$ be a $G$-set, and fix any $x\in X$. Then, $|G\cdot x|=|G:G_x|=|G|/|G_x|$
</div>
<div class="proof4">
Apply the above theorem to the $G$-set $G\cdot x$ where $Y=\{x\}$.
</div>
<p>It’s worth noting that Orbit-Stabilizer usually only refers to the corollary above, but this stronger version is closer to our main theorem.</p>
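<p>The corollary is easy to watch in action computationally. Here is a sketch (the choice of $S_4$ acting on 2-element subsets of $\{0,1,2,3\}$ is mine) verifying $|G\cdot x|\cdot|G_x|=|G|$:</p>

```python
from itertools import permutations

# S_4 acting on 2-element subsets of {0, 1, 2, 3}
G = list(permutations(range(4)))
x = frozenset({0, 1})

act = lambda g, s: frozenset(g[i] for i in s)      # g . {i, j} = {g(i), g(j)}
orbit = {act(g, x) for g in G}
stab = [g for g in G if act(g, x) == x]            # Stab(x) = S_2 x S_2 inside S_4

print(len(orbit), len(stab), len(G))               # 6 4 24
assert len(orbit) * len(stab) == len(G)            # |G . x| * |G_x| = |G|
```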
<h1 id="a-quick-intro-to-representations-of-finite-groups">A Quick Intro to Representations of Finite Groups</h1>
<p>Now that we’ve seen Orbit-Stabilizer, we’ll need to introduce some definitions from representation theory.</p>
<div class="definition">
Fix a group $G$ and a vector space $V$ over a field $\F$. A <b>(linear) representation of $G$</b> is a group homomorphism $\rho:G\rightarrow\GL_{\F}(V)$. Given such a map, we call $V$ a <b>$G$-rep</b>, and morphisms of $G$-reps are $G$-equivariant linear maps. Finally $\theta:G\to\GL_{\F}(U)$ is a <b>subrepresentation</b> if $U\subseteq V$ and $\theta(g)=\rho(g)\mid_U$ for all $g\in G$.
</div>
<p>When studying linear representations of groups, there are two main perspectives one can take. Everything can be done in terms of an explicit representation (i.e. the map $\rho$ above) or in terms of modules over the group ring. Since I haven’t talked about modules on this blog before <sup id="fnref:4"><a href="#fn:4" class="footnote">3</a></sup>, I’ll stick to the explicit representation approach and leave exercises to translate things into statements about modules for the interested reader.</p>
<p><span class="exercise">
Prove that a linear representation of $G$ is the same thing as an $\F[G]$-module. <sup id="fnref:5"><a href="#fn:5" class="footnote">4</a></sup>
</span></p>
<p>Thankfully, we don’t need a lot of representation theory for the main result of this post. We only need to know a few different types of linear representations. Also, in case I ever forget to mention this, for the rest of this post, assume all vector spaces are finite-dimensional and assume that all groups are finite.</p>
<div class="definition">
A <b>permutation representation</b> of $G$ on a finite-dimensional $\F$-vector space $V$ is a linear representation $\rho:G\rightarrow\GL(V)$ in which the elements of $G$ act by permuting some basis $B=\{b_1,\dots,b_n\}$ for $V$.
</div>
<div class="example">
Consider the symmetric group $S_n$ acting on $\C^n=\bigoplus_{i=1}^n\C e_i$ via $\sigma\cdot e_i=e_{\sigma(i)}$.
</div>
<div class="example">
Let $G$ be any finite group, and consider $\C[G]\simeq\bigoplus_{g\in G}\C g$ as vector spaces. This is the <b>regular representation</b> when $G$ acts via $h\cdot g=hg$ on the basis.
</div>
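<p>Concretely, a permutation representation really is a homomorphism into matrices. This sketch (helper names mine) builds the permutation matrices for $S_3\curvearrowright\C^3$ and checks $\rho(\sigma\tau)=\rho(\sigma)\rho(\tau)$ on a pair of transpositions:</p>

```python
def perm_matrix(sigma, n):
    """Matrix of the permutation representation: e_c -> e_{sigma(c)}."""
    return [[1 if r == sigma[c] else 0 for c in range(n)] for r in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def compose(s, t):                       # (s . t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(len(s)))

s, t = (1, 0, 2), (0, 2, 1)              # two transpositions in S_3
# rho is a homomorphism: rho(s . t) = rho(s) rho(t)
assert perm_matrix(compose(s, t), 3) == matmul(perm_matrix(s, 3), perm_matrix(t, 3))
print(perm_matrix(compose(s, t), 3))     # [[0, 0, 1], [1, 0, 0], [0, 1, 0]]
```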
<p>Finally, we need the notion of induced representations. This lets you take a representation of a group $H$ and canonically construct a representation of a larger group $G\supseteq H$. The construction is very reminiscent of the Orbit-Stabilizer theorem.</p>
<div class="construction">
Let $H\le G$ be a subgroup of $G$, and let $V$ be an $H$-rep. Fix a complete set of coset representatives $\sigma_1=e,\dots,\sigma_n\in G$ s.t. $G/H=\{\sigma_iH:1\le i\le n\}$ and $n=|G/H|$. Then, as a vector space, the <b>induced representation</b> from $H$ to $G$ is
$$\Ind_H^GV=\bigoplus_{i=1}^n\sigma_iV$$
where $\sigma_iV=\{\sigma_iv\mid v\in V\}$ is a space of formal symbols. This is given a $G$-action as follows: given some $\sigma_iv\in\Ind_H^GV$, there's a unique $\sigma_j$ and $h\in H$ s.t. $g\sigma_i=\sigma_jh$. We define $g\cdot\sigma_iv=\sigma_j(h\cdot v)$.
</div>
<div class="exercise">
Prove that, as $\F[G]$-modules, we have
$$\Ind_H^GV\simeq\F[G]\otimes_{\F[H]}V$$
so induction is really just extension of scalars.
</div>
<div class="example">
The regular representation is $\Ind_1^G\F$ where $1$ denotes the trivial group and $G$ acts trivially (i.e. by the identity) on $\F$.
</div>
<h1 id="orbit-stabilizer-v2">Orbit-Stabilizer v2</h1>
<p>This is where we’ll prove the main result, which roughly says that (almost-)permutation representations are induced representations.</p>
<div class="theorem" name="Orbit-Stabilizer Variation">
Let $V$ be a $G$-rep with a decomposition $V\simeq\bigoplus_{i=0}^nV_i$ as a vector space s.t. for all $i,j\in\{0,\dots,n\}$, there exists a $g\in G$ s.t. $g\cdot V_i=V_j$, and let $H=\Stab(V_0)$. Then,
$$V\simeq\Ind_H^GV_0$$
</div>
<div class="proof4">
We will show this by constructing an explicit isomorphism. Let $f:\Ind_H^GV_0\rightarrow V$ be the map given by
$$f(\sigma_iv_0)=\sigma_i\cdot v_0$$
This is easily seen to be $G$-equivariant, and it is linear by construction. For surjectivity, it suffices to find preimages for elements of the form $v_i\in V_i$. Given such an element, there exists some $g_i\in G$ and $w_i\in V_0$ s.t. $g_i\cdot w_i=v_i$. Now, we can write $g_i=\ith\sigma_jh_i$ for a unique $h_i\in H$ and coset representative $\ith\sigma_j$. Doing so gives us that $f(\ith\sigma_j(h_i\cdot w_i))=v_i$ so $f$ is surjective as claimed. Finally, we need to show that $f$ is injective, so fix some $w=\sum_{\sigma_i\in G/H}\sigma_i\ith v_0\in\ker f$. This means that $\sum_{\sigma_i\in G/H}\sigma_i\cdot\ith v_0=0$, but we claim that, for $i\neq j$, $\sigma_i\cdot\ith v_0$ and $\sigma_j\cdot\Ith vj_0$ belong to different summands (i.e. different $V_i$'s), which forces $\sigma_i\cdot\ith v_0=0\implies\ith v_0=0$ for all $i$ so $w=0$. To prove the claim, suppose that $\sigma_i\cdot\ith v_0,\sigma_j\cdot\Ith vj_0\in V_k$ for some $k$. Then,
$$\inv\sigma_j\sigma_i\cdot\ith v_0\in V_0\implies\inv\sigma_j\sigma_i\in H\implies\sigma_j=\sigma_i$$
and we win.
</div>
<p>This wasn’t the proof I had in mind. I imagined (and still do) that it was possible to directly apply the original orbit-stabilizer by letting $X$ be a (well-chosen) basis for $V$ and $Y$ be a (well-chosen) basis for $V_0$. However, in trying to make this work, I ran into issues getting a well-defined action of $G$ on $B$. Basically, $H=\Stab(V_0)$ can act nontrivially, so it’s possible that $h\cdot B_0\not\subseteq B$, which is troublesome. I still hold out hope that this idea can be salvaged in general<sup id="fnref:6"><a href="#fn:6" class="footnote">5</a></sup>, so</p>
<div class="exercise">
See if you can come up with a proof of the above that applies the original Orbit-Stabilizer theorem (e.g. apply it to a basis of V and then extend linearly). If you can, let me know.
</div>
<p>Even though the proof is a little unsatisfying, we have proven what we set out to prove, so let’s end with a couple examples. $\newcommand{\trv}{\underline{\text{Trv}}}\newcommand{\alt}{\underline{\text{Alt}}}$</p>
<div class="example">
Consider $S_n\curvearrowright\Sym^2\C^n$ where $\C^n=\bigoplus\C e_i$ and $S_n$ acts by permuting the $e_i$. Restricting this action to the basis $B=\{e_ie_j:i,j\in\{1,\dots,n\}\}$, we see there are two $S_n$-orbits
$$\begin{matrix}
B_0 &=& \brackets{e_ie_j:i\neq j} && \Stab(\C e_1e_2) &=& S_2\times S_{n-2}\\
B_1 &=& \brackets{e_1^2,\dots,e_n^2} && \Stab(\C e_1^2) &=& S_{n-1}
\end{matrix}$$
Thus we can write $\Sym^2\C^n=V\oplus W$ where $V=\C B_0=\bigoplus_{i\neq j}\C e_ie_j$ and $W=\C B_1=\bigoplus_{i=1}^n\C e_i^2$. Furthermore, $S_n$ acts transitively on the summands of these decompositions of $V,W$, so applying our theorem ($V_0=\C e_1e_2$ and $W_0=\C e_1^2$) yields
$$\Sym^2\C^n\simeq\parens{\Ind_{S_2\times S_{n-2}}^{S_n}\trv\otimes\trv}\oplus\parens{\Ind_{S_{n-1}}^{S_n}\trv}$$
where $\trv$ is the trivial 1-dimensional $S_k$ representation sending each element to the number 1.
</div>
<div class="example">
This time, let's look at $S_n\curvearrowright\parens{\Wedge^2\C^n}\otimes\C^n$ where $\C^n=\bigoplus e_i$ and $S_n$ again acts by permuting the $e_i$. We have a basis $B=\{(e_i\wedge e_j)\otimes e_k:i,j,k\in\{1,\dots,n\},i< j\}$ but it's not fixed by $S_n$ (e.g. $(12)\cdot(e_1\wedge e_2)\otimes e_3=(e_2\wedge e_1)\otimes e_3\not\in B$), so we'll look instead at the spanning set $B'=\{(e_i\wedge e_j)\otimes e_k:i,j,k\in\{1,\dots,n\},i\neq j\}$ which is fixed by $S_n$. This has the following orbits
$$\begin{matrix}
B_0 &=& \brackets{(e_i\wedge e_j)\otimes e_k:i\neq j\neq k} && \Stab(\C (e_1\wedge e_2\otimes e_3)) &=& S_2\times S_{n-3}\\
B_1 &=& \brackets{(e_i\wedge e_j)\otimes e_k:i\neq j,k\in\{i,j\}} && \Stab(\C(e_1\wedge e_2\otimes e_1)) &=& S_{n-2}
\end{matrix}$$
It's worth noting that $(12)\cdot(e_1\wedge e_2)\otimes e_1=-(e_1\wedge e_2)\otimes e_2$ so we can switch whether $k=i$ or $k=j$ in $B_1$ above. Applying our theorem to (the span of) each orbit and summing them up, we get that
$$\Wedge^2\C^n\otimes\C^n\simeq\parens{\Ind_{S_2\times S_{n-3}}^{S_n}\alt\otimes\trv}\oplus\parens{\Ind_{S_{n-2}}^{S_n}\trv}$$
where $\alt$ is the alternating 1-dimensional $S_k$ representation sending each element to its sign.
</div>
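<p>As a hedged sanity check on the two examples above (my own addition, not from the original post), the dimensions of both sides of each decomposition must agree, since $\dim\Ind_H^GW=[G:H]\dim W$. A minimal sketch:</p>

```python
from math import comb, factorial

# Illustrative check: dimensions on both sides of the two decompositions
# above agree, using dim Ind_H^G(W) = [G:H] * dim(W).
for n in range(4, 12):
    # Sym^2 C^n: n(n+1)/2  vs  [S_n : S_2 x S_{n-2}] + [S_n : S_{n-1}]
    assert comb(n + 1, 2) == factorial(n) // (2 * factorial(n - 2)) + n
    # (Wedge^2 C^n) ⊗ C^n: C(n,2)*n  vs  [S_n : S_2 x S_{n-3}] + [S_n : S_{n-2}]
    lhs = comb(n, 2) * n
    rhs = factorial(n) // (2 * factorial(n - 3)) + factorial(n) // factorial(n - 2)
    assert lhs == rhs
print("dimension counts match for n = 4..11")
```

<p>Both identities just say that the sizes of the orbits of basis vectors add up to the total dimension.</p>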
<div class="footnotes">
<ol>
<li id="fn:1">
<p>which, unsurprisingly, is a version of Orbit-stabilizer for representations of finite groups <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>for X a set, they are (self) bijections <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>but really should at some point <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
<li id="fn:5">
<p>This includes proving that $\F[G]$-linear maps are $G$-equivariant and that submodules correspond to subrepresentations <a href="#fnref:5" class="reversefootnote">↩</a></p>
</li>
<li id="fn:6">
<p>It certainly can be in the case that H does indeed act trivially (or at least stabilizes the basis)… Question: is there always a basis B_0 s.t. Stab(V_0) is contained in Stab(B_0)? <a href="#fnref:6" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Groups Aren’t Abstract Nonsense2018-03-26T02:08:00+00:002018-03-26T02:08:00+00:00https://nivent.github.io/blog/geo-group<p>I’ve recently been skimming through this book called <a href="https://press.princeton.edu/titles/11042.html">“Office Hours with a Geometric Group Theorist”</a> which, perhaps unsurprisingly, is about using geometric objects to study groups. It mostly focuses on how group actions on graphs and metric spaces can reveal information about the group<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>, and contains some pretty nice results. Unfortunately, I have too many in mind for one post, but I would still like to introduce the basic notions of the subject and a few results I enjoyed. I imagine this will be a lengthy post with a mix of introducing theory and neat applications<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>.</p>
<h1 id="groups-actions-">Group Actions <sup id="fnref:9"><a href="#fn:9" class="footnote">3</a></sup></h1>
<p>While I didn’t emphasize this much then, towards the beginning of my <a href="../group-intro">intro post</a> on group theory, I mentioned that groups often “perform some action on an object.” Here is where we’ll get to see what I meant and why this is useful.</p>
<div class="definition">
Let $G$ be a group and $X$ a set. A <b>(left) group action</b> of $G$ on $X$ (sometimes denoted $G\curvearrowright X$, pronounced "$G$ acts on $X$") is a function $G\times X\rightarrow X$ where the image of $(g,x)$ is denoted by $g\cdot x$ satisfying
<ul>
<li> $1\cdot x=x$ for all $x\in X$ where $1\in G$ is the identity</li>
<li> $g\cdot(h\cdot x)=(gh)\cdot x$ for all $x\in X$ and $g,h\in G$</li>
</ul>
</div>
<p>In essence, a group action is an action of $G$ on $X$ (i.e. a function $G\times X\rightarrow X$) that respects the group structure of $G$. Above, $X$ is just a set, so this is all we require. When we later look at actions on graphs or metric spaces, we’ll further require that $G$’s action preserves $X$’s structure<sup id="fnref:3"><a href="#fn:3" class="footnote">4</a></sup>.</p>
<p>One of the most basic examples of a group action comes from symmetric groups.
<span class="definition">
Given a set $X$, its <b>symmetric group</b> $S_X$ is $S_X=\{f:X\rightarrow X\mid f\text{ bijective}\}$ with composition as the group operation. This has a natural action on $X$; namely $f\cdot x=f(x)$.
</span>
If you want, you can verify that this gives a group action, but it is pretty tautological. When $X$ is just a set, there’s no additional structure to preserve so we think of its symmetries as just being permutations; hence the name of the above group. When you think about it, for any group action $G\curvearrowright X$, each $g\in G$ induces a function $X\rightarrow X$ and this function turns out to be a permutation. Thus, we have the following
<span class="exercise">
Prove that a group action $G\curvearrowright X$ is the same thing as a group homomorphism $G\rightarrow S_X$. If this homomorphism is injective, then we say that the action is <b>faithful</b>.
</span>
As a corollary to this exercise, we get the following
<span class="theorem" name="Cayley">
Every group is isomorphic to a subgroup of a symmetric group.
</span>
<span class="proof4">
Let $G$ be a group. By the above exercise it suffices to show that $G$ acts faithfully on some set $X$. We can simply take $X=G$ with action given by left multiplication (i.e. $g\cdot h=gh$). This action is faithful since every $g\in G$ acts differently on the identity, so we win.
</span>
When studying general group actions, there are two basic concepts of extreme importance.
<span class="definition">
Let $G$ be a group acting on a set $X$. Given some $x\in X$, its <b>orbit</b> is
<script type="math/tex">G\cdot x=\{g\cdot x\mid g\in G\}</script>
Furthermore, its <b>stabilizer (or stabilizer subgroup)</b> is
<script type="math/tex">G_x=\{g\in G\mid g\cdot x=x\}</script>
</span>
Personally, I like to think of orbits and stabilizers in terms of graphs. Given an action $G\curvearrowright X$, you can form a graph with vertex set $X$, such that for each $(x,g)\in X\times G$ you get an edge from $x$ to $g\cdot x$. In this language, orbits are connected components of this graph and (elements of) stabilizers correspond to self-loops. <sup id="fnref:4"><a href="#fn:4" class="footnote">5</a></sup></p>
<p>The following two theorems <sup id="fnref:17"><a href="#fn:17" class="footnote">6</a></sup> are good to know for general group action knowledge, but I do not believe either will be used in this post, so I won’t bother proving them here.
<span class="theorem" name="orbit-stabilizer">
Let $G$ be a group acting on $X$. Fix some $x\in X$. Then,
<script type="math/tex">|G\cdot x|=[G:G_x]=\frac{|G|}{|G_x|}</script>
</span>
<span class="lemma" name="Burnside">
Let $G$ be a finite group acting on a set $X$. For $g\in G$, let $X^g=\{x\in X\mid g\cdot x=x\}$ be the elements fixed by $g$, and let $X/G$ denote the (set-theoretic) quotient of $X$ by the equivalence relation $x\sim y\iff x\in G\cdot y$. Then,
<script type="math/tex">|X/G|=\frac1{|G|}\sum_{g\in G}|X^g|</script>
</span>
To finish this section, I’ll mention a few definitions that should come up in this post. There are a few types of group actions. In the first exercise above, I defined what a faithful action is already; two more of importance are transitive and free actions.
<span class="definition">
A group action $G\actson X$ is called <b>transitive</b> if $G\cdot x=X$ for some (equivalently, all) $x\in X$. The action is <b>free (or $G$ acts freely on $X$)</b> if $\Stab(x)$ is trivial for all $x\in X$ (i.e. no non-trivial element of $G$ fixes any element of $X$)
</span>
We’ll see that free actions in particular reveal algebraic information about the group involved.</p>
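<p>Neither theorem is hard to verify computationally on a small example. The sketch below (illustrative only; all names are my own) checks both orbit-stabilizer and Burnside for $S_3$ acting on $\{0,1,2\}$:</p>

```python
from itertools import permutations

# S_3 as tuples g where g[x] is the image of x, acting on X = {0, 1, 2}.
G = list(permutations(range(3)))
X = range(3)

def act(g, x):
    return g[x]

# Orbit-Stabilizer: |orbit(x)| * |stab(x)| = |G| for every x.
for x in X:
    orbit = {act(g, x) for g in G}
    stab = [g for g in G if act(g, x) == x]
    assert len(orbit) * len(stab) == len(G)

# Burnside: |X/G| equals the average number of fixed points over g in G.
total_fixed = sum(sum(1 for x in X if act(g, x) == x) for g in G)
orbits = {frozenset(act(g, x) for g in G) for x in X}
assert len(orbits) == total_fixed // len(G)
print(len(orbits))  # the action is transitive, so there is 1 orbit
```

<p>The same brute-force pattern works for any small finite action, e.g. counting necklace colorings with Burnside.</p>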
<h1 id="graphs">Graphs</h1>
<p>If we’re gonna study groups through their actions, then we’re gonna need objects for them to act on. Sets are all well and good, but are too unrestrictive. The more structure we require on the objects we act on, the more we will have at our disposal to gain information from.</p>
<p>To begin, we will remind the reader of what a graph is, and then introduce how groups act on them.
<span class="definition">
A <b>graph</b> $G=(V,E)$ is a pair consisting of a set $V=V(G)$ of <b>vertices</b> and a set $E=E(G)\subseteq V\times V$ of <b>edges</b>. If $(u,v)\in E\iff(v,u)\in E$ for all $u,v\in V$, then we say $G$ is <b>undirected</b>. If $(v,v)\not\in E$ for all $v\in V$, then we say $G$ is <b>simple</b>.
</span></p>
<p><span class="exercise">
If you don’t already know them, look up definitions for paths, connected graphs, and trees. <sup id="fnref:6"><a href="#fn:6" class="footnote">7</a></sup>
</span></p>
<p>Graphs are hardly ever written down in terms of explicit vertex and edge sets. More commonly, they are given as some drawing.</p>
<p><span class="example">
Below is an example of an undirected (but not simple) graph
<center><img src="https://nivent.github.io/images/blog/geo-group/graph.png" width="600" height="100" /></center>
</span>
To define a group action on a graph, we need a notion of an (invertible) structure-preserving map. This will give us a notion of when two graphs are the same, and the different ways in which we can view a graph as being the same as itself<sup id="fnref:5"><a href="#fn:5" class="footnote">8</a></sup> will be what we call its symmetries.
<span class="definition">
Given two graphs $G,H$, a <b>graph isomorphism</b> is a bijection $f:V(G)\rightarrow V(H)$ s.t. <script type="math/tex">(u,v)\in E(G)\iff(f(u),f(v))\in E(H)\text{ for all }u,v\in V(G)</script>
A graph isomorphism $f:V(G)\rightarrow V(G)$ between a graph and itself is called an <b>automorphism</b>.
</span>
<span class="exercise">
Given a graph $G$, let $\Aut(G)$ denote the set of automorphisms of $G$. Prove that this set forms a group under composition.
</span>
<span class="example">
Let $K_n$ denote the complete graph on $n$ vertices (i.e. every vertex is connected to every other vertex). Then, every vertex is interchangeable, so any rearrangement of vertices gives an isomorphism. Hence $\Aut(K_n)\simeq S_n$.
</span>
<span class="exercise">
$K_{n,m}$ is the graph with vertex set $V=V_1\sqcup V_2$ s.t. $|V_1|=m$, $|V_2|=n$, and $E=V_1\times V_2\cup V_2\times V_1$. e.g. $K_{3,2}$ is pictured below
<center><img src="https://nivent.github.io/images/blog/geo-group/k32.png" width="300" height="100" /></center>
Calculate $\Aut(K_{n,m})$ when $n\neq m$, and calculate $\Aut(K_{n,n})$ for all $n$.
</span>
Now that we’re comfortable with graphs and their isomorphisms, we finally define
<span class="definition">
A <b>group action on a graph</b> is a group homomorphism $G\rightarrow\Aut(\Gamma)$ where $G$ is a group and $\Gamma$ is graph.
</span>
<span class="exercise">
Let $G\actson\Gamma:G\rightarrow\Aut(\Gamma)$ be a group action with $\Gamma$ a graph. Show that this is the same thing as a group action $G\actson V(\Gamma)$ s.t. $(v,u)\in E(\Gamma)\iff(g\cdot v,g\cdot u)\in E(\Gamma)$.
</span>
If the above exercise seems obvious, then that’s a good sign that we’ve made good definitions. Admittedly, groups acting on graphs don’t really make an appearance in any of the applications I want to talk about today (although they will appear in the “further reading” section at the end), but this is still something one should know. As our only real use of this, we’ll prove a slightly stronger version of Cayley’s theorem.
<span class="definition">
Let $G$ be a group with a generating set $S$. Its <b>Cayley Graph</b> (with respect to $S$) is the graph $\Gamma(G,S)=(V,E)$ where $V=G$ and $E=\{(g,gs)\mid g\in G,\ s\in S\}$.
</span>
<span class="example">
The Cayley Graph for $S_3$ with generating set $S=\{(12),(23)\}$ is pictured below (blue edges are $(12)$ and red edges are $(23)$)
<center><img src="https://nivent.github.io/images/blog/geo-group/s3.png" width="300" height="100" /></center>
</span>
<span class="theorem" name="Cayley">
Every group is isomorphic to the automorphism group of some graph.
</span></p>
<div class="proof4">
Fix a group $G$ with a generating set $S$, and let $\Gamma=\Gamma(G,S)$ be its Cayley graph. We will show in particular that $G\simeq\Aut(\Gamma(G,S))$. Now, we clearly have a homomorphism $G\rightarrow\Aut(\Gamma)$ given by $g\cdot v_h=v_{gh}$ where, to avoid confusion, we denote the vertex set of $\Gamma$ by the symbols $\{v_g\mid g\in G\}$. This action is faithful because, letting $e\in G$ denote the identity, $g\cdot v_e\neq h\cdot v_e$ for $g\neq h$. Hence, we only need to show all automorphisms arise in this way.<br />
To make this part of the proof simple, we'll need to impose one further restriction on our automorphisms: they must preserve edge labels (e.g. $(g,gs)\in E\iff(\phi(g),\phi(g)s)\in E$ for $\phi\in\Aut(\Gamma)$, where $s\in S$ is regarded as the label of that edge). Now, consider an arbitrary $\phi\in\Aut(\Gamma)$ and write $\phi(v_e)=v_g$ where $e$ is the identity. Fix some vertex $v_h\in V(\Gamma)$. We will show that $\phi(v_h)=\phi_g(v_h):=v_{gh}$ by inducting on the length of the shortest (not necessarily directed) path from $v_e$ to $v_h$. This obviously holds when the path has length 0, so suppose the path has length $n>0$; then we can write $h=ws$ where there's a path of length $n-1$ from $v_e$ to $v_w$ and $s\in S$. By our inductive hypothesis, $\phi(v_w)=\phi_g(v_w)=v_{gw}$, so $\phi$ carries the edge $(v_w,v_h)$ to the edge $(v_{gw},v_{gws})$, giving $\phi(v_h)=v_{gws}=v_{gh}$. Hence $\phi=\phi_g$, proving the claim.
</div>
<p><span class="exercise">
When inducting in the above proof, we claim that $s\in S$, but in actuality, it’s also possible that $\inv s\in S$ instead. Finish the proof by handling this case.
</span></p>
<p><span class="exercise">
In the above proof, we assume that automorphisms preserve edge labels. Does the theorem still hold without this extra assumption (hint: <sup id="fnref:7"><a href="#fn:7" class="footnote">9</a></sup>)?
</span></p>
<p>This realizes every group as the symmetries of some graph. Hence, even the most abstract groups have some kind of concrete realization.</p>
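<p>For a concrete check of the proof (my own illustration, not part of the original argument), one can brute-force the label-preserving automorphisms of the Cayley graph of $S_3$ from the earlier example and confirm there are exactly $|S_3|=6$ of them:</p>

```python
from itertools import permutations

# Count label-preserving automorphisms of the Cayley graph of S_3 with
# generators (12), (23); the theorem says they are exactly the left
# translations, so we expect |S_3| = 6 of them.
def compose(f, g):  # (f o g)(i) = f(g(i)), permutations in one-line notation
    return tuple(f[g[i]] for i in range(3))

G = list(permutations(range(3)))
gens = [(1, 0, 2), (0, 2, 1)]  # (12) and (23)

# labeled edge set: an edge (g, g*s) carrying label s, for each g and s
edges = {(g, compose(g, s), s) for g in G for s in gens}

count = 0
for images in permutations(G):  # all 720 candidate vertex bijections
    phi = dict(zip(G, images))
    if all((phi[u], phi[v], s) in edges for (u, v, s) in edges):
        count += 1
print(count)  # expect 6
```

<p>Since $\phi$ must send the edges labeled $s$ out of $\phi(v_e)$ to edges labeled $s$, connectivity forces $\phi$ to be a left translation, exactly as in the proof.</p>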
<h1 id="metric-spaces">Metric Spaces</h1>
<p>We’re almost to the first nice result; I promise. Before we can state it though, I need to introduce the concept of a Metric space and how they’re acted on by groups. Intuitively, a metric space is anywhere you have some notion of distance.</p>
<div class="definition">
A <b>metric space</b> $(X,d)$ is a set $X$ together with a function $d:X\times X\rightarrow\R_{\ge0}$ satisfying the following:
<ul>
<li> positive definiteness: $d(x,y)\ge0$ for all $x,y\in X$ with equality iff $x=y$</li>
<li> symmetry: $d(x,y)=d(y,x)$ for all $x,y\in X$</li>
<li> triangle inequality: $d(x,y)\le d(x,z) + d(z,y)$ for all $x,y,z\in X$</li>
</ul>
</div>
<div class="example">
There are plenty of examples of metric spaces already familiar to you
<ul>
<li> Euclidean space $\R^n$ with the Euclidean metric
$$d((x_1,\dots,x_n),(y_1,\dots,y_n))=\sqrt{\sum_{i=1}^n|x_i-y_i|^2}$$
</li>
<li> Euclidean space with the taxicab metric
$$d((x_1,\dots,x_n),(y_1,\dots,y_n))=\sum_{i=1}^n|x_i-y_i|$$
</li>
<li> Any graph $\Gamma$ (really just its vertex set $V(\Gamma)$) with the path metric, where the distance between any two vertices is the length of the shortest path between them.
</li>
</ul>
</div>
<p>The next example is of particular importance, and it also a bit surprising at first.</p>
<div class="example">
Let $G$ be a group with generating set $S$ and Cayley graph $\Gamma=\Gamma(G,S)$. Then, we can view $G$ also as a metric space with metric given by the path metric on $\Gamma$ (technically, $V(\Gamma)$)!
</div>
<p>There’s actually an alternative way to think of this metric <sup id="fnref:8"><a href="#fn:8" class="footnote">10</a></sup>. Let $\inv S=\{\inv s\mid s\in S\}$ and define the <strong>word length</strong> of an arbitrary $g\in G$ to be the length of the shortest word in $S\cup\inv S$ that is equal to $g$. Then, we can turn $G$ into a metric space where the distance between $g,h\in G$ is the word length of $\inv gh$.</p>
<div class="exercise">
Show that the word length of $g\in G$ with respect to $S$ is the length of the shortest path from $v_e$ to $v_g$ in $\Gamma(G,S)$. Furthermore, show that this word length construction and the Cayley graph construction give $G$ the same metric (in the sense that $d_\text{word length}(g,h)=d_\text{Cayley}(g,h)$).
</div>
<div class="exercise">
Show that the word metric on $\Z^2$ is the same as the taxicab metric.
</div>
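<p>The last exercise can also be spot-checked by computer (an illustrative sketch with my own names, assuming the standard generating set $S=\{(1,0),(0,1)\}$): a breadth-first search over the Cayley graph of $\Z^2$ computes word length, and it agrees with the taxicab metric.</p>

```python
from collections import deque

def word_length(target, bound=12):
    """BFS word length in Z^2 with generators (±1,0), (0,±1)."""
    gens = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    seen = {(0, 0): 0}
    q = deque([(0, 0)])
    while q:
        v = q.popleft()
        if v == target:
            return seen[v]
        if seen[v] >= bound:  # cap the search so it always terminates
            continue
        for dx, dy in gens:
            w = (v[0] + dx, v[1] + dy)
            if w not in seen:
                seen[w] = seen[v] + 1
                q.append(w)
    return None

# word metric == taxicab metric on a small window
for x in range(-3, 4):
    for y in range(-3, 4):
        assert word_length((x, y)) == abs(x) + abs(y)
print("word metric = taxicab metric on [-3,3]^2")
```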
<p>Naturally, we have a notion of a symmetry of a metric space coming from ways of viewing a metric space as equivalent to itself.</p>
<div class="definition">
Let $(X,d_X), (Y,d_Y)$ be two metric spaces. An <b>isometry</b> $f:X\rightarrow Y$ is a bijective function s.t.
$$d_X(x,y)=d_Y(f(x),f(y))$$
for all $x,y\in X$. Furthermore, the set of isometries from $X\rightarrow X$ forms a group denoted $\DeclareMathOperator{\Isom}{Isom}\Isom(X)$.
</div>
<div class="definition">
A <b>group action on a metric space</b> is a group homomorphism $G\rightarrow\Isom(X)$ where $G$ is a group and $X$ a metric space
</div>
<div class="exercise">
Give an equivalent formulation of groups acting on metric spaces
</div>
<div class="example">
<ul>
<li> $\zmod n$ acts on $\R^2$ with the Euclidean metric via rotations </li>
<li> $\Z^n$ acts on $\R^n$ with the Euclidean or taxicab metrics by translations </li>
<li> $S_4$ acts on a tetrahedron by permuting its vertices </li>
<li> Every group acts on itself with the word metric by left multiplication </li>
</ul>
</div>
<p>Finally, let’s see how actions can be used to reveal algebraic information about groups.</p>
<h1 id="first-neat-application">First Neat Application</h1>
<p>Before I introduce our first real application of these ideas, I need to introduce one more definition.</p>
<div class="definition">
Let $G$ be a group. An element $g\in G$ is said to be <b>torsion</b> if it has finite order. If $G$ has no (non-trivial) torsion elements, then we say that $G$ is <b>torsion-free</b>
</div>
<p>Now, our goal of this section will be to prove the following theorem</p>
<div class="theorem">
If $G$ acts freely on $\R^n$ with the Euclidean metric, then $G$ is torsion-free
</div>
<p>Take a moment to let that sink in; from an almost purely geometric situation (a group acting on a metric space), we can make a non-trivial algebraic conclusion. This is a central theme of Geometric Group Theory, and more examples of this will be seen in this post<sup id="fnref:10"><a href="#fn:10" class="footnote">11</a></sup>. For this particular theorem, the proof relies on the following lemma</p>
<p><span class="definition">
Let $S\subseteq\R^n$ be some collection of points. A <b>centroid</b><sup id="fnref:11"><a href="#fn:11" class="footnote">12</a></sup> of $S$ is a point $w\in\R^n$ minimizing
<script type="math/tex">\sum_{s\in S}d(w, s)^2</script>
Note that $w$ may not exist if $|S|=\infty$
</span></p>
<div class="lemma">
Any finite set in $\R^n$ has a unique centroid
</div>
<div class="proof4">
Let $S=\{\Ith s1,\dots,\Ith sm\}$ be a subset of $\R^n$, and let $f:\R^n\rightarrow\R$ be defined by $f(x)=\sum_{i=1}^md(x,\ith s)^2$. Then, write $x=(x_1,\dots,x_n)$ and $\ith s=(\ith s_1,\dots,\ith s_n)$, so
$$\pderiv f{x_j}=\sum_{i=1}^m2(x_j-\ith s_j)\implies\grad f=2\sum_{i=1}^m(x-\ith s)$$
From this we see that the unique point at which the gradient vanishes is $x=\frac1m\sum_{i=1}^m\ith s$, so this is a critical point. Proving that this is a (global) minimum is left as an exercise. The lemma follows.
</div>
<p>With that Lemma, our theorem will actually have a fairly short proof</p>
<div class="theorem">
Let $G$ be a group acting freely on $\R^n$ with the Euclidean metric. Then, $G$ is torsion-free.
</div>
<div class="proof4">
Let $g\in G$ be any torsion element, say with $g^{m+1}=1$, and fix some point $x\in\R^n$. Let $\mc O=\{x,g\cdot x,g^2\cdot x,\dots,g^m\cdot x\}$ be the orbit of $x$ under $\gen g$. Since $\mc O$ is finite, we can apply the lemma to find a centroid $y\in\R^n$. Since $g$ acts by an isometry, $g\cdot\mc O$ must have $g\cdot y$ as a centroid, but $g\cdot\mc O=\mc O$! Hence, $g\cdot y=y$ by uniqueness of centroids, so $g\in\Stab(y)$. Since we assumed that $G$ acts freely, $g$ must be the identity, so $G$ is torsion-free.
</div>
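<p>To see the proof in action (my own example, not from the post), take $g$ to be rotation by $2\pi/5$ about the origin acting on $\R^2$: it is torsion, and the centroid of any $\gen g$-orbit is the origin, a point $g$ fixes, so the rotation action is not free.</p>

```python
import math

# Rotation by 2*pi/5 has order 5; the centroid of a 5-point orbit on the
# unit circle is the origin, which the rotation fixes.
theta = 2 * math.pi / 5
def g(p):
    x, y = p
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta))

orbit = [(1.0, 0.0)]
for _ in range(4):
    orbit.append(g(orbit[-1]))

cx = sum(p[0] for p in orbit) / 5
cy = sum(p[1] for p in orbit) / 5
assert abs(cx) < 1e-9 and abs(cy) < 1e-9  # centroid is the fixed point (0,0)
print("centroid of the orbit is the origin")
```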
<h1 id="free-groups-and-presentations">Free Groups and Presentations</h1>
<p>For the next application I want to cover, we need a little more group theory. Specifically, we need to define free groups, and since it would be a shame to introduce free groups without introducing presentations, we include them here too. The idea behind free groups is that they’re essentially the bare minimum of what’s needed to call something a group; they don’t satisfy any non-trivial relations. We will begin with the definition of a free group, and then give a “categorical” (or “universal”) characterization of them.</p>
<div class="definition">
Let $S$ be an arbitrary set, and let $\inv S=\{\inv s\mid s\in S\}$ (note: $S\cap\inv S=\emptyset$). The <b>free group on $S$</b> is the set $F(S)$ of (reduced) words in $S\cup\inv S$ with group operation given by reduced concatenation (e.g. write $ab\inv ba$ as $a^2$). The <b>rank</b> of $F(S)$ is $|S|$, and $|S|=|T|\implies F(S)\simeq F(T)$. The free group on an $n$-element set is denoted $F_n$.
</div>
<div class="exercise">
Verify that the above actually defines a group.
</div>
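<p>One way to get a feel for the group operation is to implement it. The sketch below (my own illustrative code, not from the post) represents a word as a list of signed letters and performs reduced concatenation, assuming both inputs are already reduced:</p>

```python
# A letter is a pair (symbol, sign); sign -1 marks an inverse generator.
def reduce_concat(u, v):
    """Concatenate two reduced words, cancelling s·s⁻¹ pairs at the seam."""
    w = list(u)
    for letter in v:
        if w and w[-1][0] == letter[0] and w[-1][1] == -letter[1]:
            w.pop()          # cancel adjacent inverse pair
        else:
            w.append(letter)
    return w

a, A, b, B = ('a', 1), ('a', -1), ('b', 1), ('b', -1)

# ab⁻¹ · ba reduces to a², since b⁻¹b cancels
assert reduce_concat([a, B], [b, a]) == [a, a]

# w · w⁻¹ reduces to the identity (the empty word)
w = [a, b, A, B]
w_inv = [(s, -e) for (s, e) in reversed(w)]
assert reduce_concat(w, w_inv) == []
print("reduced concatenation works")
```

<p>Associativity and inverses then follow from properties of this cancellation, which is the content of the exercise above.</p>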
<div class="example">
$F_1\simeq F(\{a\})$ is the set of (reduced) words in $a,\inv a$. Any such word can be written as $a^n$ for some $n\in\Z$, so the map $a^n\mapsto n$ gives an isomorphism $F_1\simeq\Z$.
</div>
<div class="example">
$F_2\simeq F(\{a,b\})$ is the group of words in $\{a,b\}$ so a typical element might look like $a^2\inv bab^{-3}a^3b$. Note that $ab\neq ba$ so $F_2$ is non-abelian. We do have a surjective homomorphism $F_2\rightarrow\Z^2$ given by $a\mapsto(1,0)$ and $b\mapsto(0,1)$
</div>
<p>Free groups can take a while to wrap your head around. I remember I used to be enamored with group theory because from so few axioms (only like 3 or 4), you were guaranteed to get an object that was really well-behaved with so much structure, but that all ended when I learned about free groups<sup id="fnref:14"><a href="#fn:14" class="footnote">13</a></sup>. Free groups, and by extension general (non-abelian) groups <sup id="fnref:12"><a href="#fn:12" class="footnote">14</a></sup>, are trash; they can have too much freedom and/or unintuitive structure. It’s not all bad though; this makes results involving them feel extra interesting.</p>
<div class="exercise">
Construct an explicit embedding $F_3\hookrightarrow F_2$. If that's too easy then construct an embedding $F_4\hookrightarrow F_2$.
</div>
<div class="exercise">
The previous exercise shows that you can construct an injective homomorphism $F_2\hookrightarrow F_2$ that is not surjective. Is the opposite possible? Construct an example of a surjective homomorphism $F_2\twoheadrightarrow F_2$ with non-trivial kernel, or prove that none exists.
</div>
<p>This construction of free groups is nice (and necessary), but we could alternatively choose to characterize free groups in terms of a so-called universal property. This has the advantage of including only the defining properties of a free group without tying the definition to any particular construction.</p>
<div class="definition">
Given a set $S$, we say $F(S)$ is a <b>free group</b> on $S$, if there exists an embedding $S\hookrightarrow F(S)$, and given any group $G$ and set map $S\rightarrow G$, there exists a unique group homomorphism $\phi:F(S)\rightarrow G$ s.t. the following diagram commutes
<center><img src="https://nivent.github.io/images/blog/geo-group/diag.png" width="150" height="100" /></center>
where the dotted line signifies that this is the map we are claiming existence of.
</div>
<div class="exercise">
Show that our previous construction satisfies this criterion
</div>
<div class="theorem">
The above characterises the free group on $S$ uniquely up to unique isomorphism
</div>
<div class="proof4">
Let $G,H$ be two groups with embeddings $S\hookrightarrow G,H$ satisfying the above criterion. Then, because $G$ is a free group on $S$ and we have a map $S\hookrightarrow H$, this extends to a unique homomorphism $\phi:G\rightarrow H$. Similarly, we get a unique homomorphism $\psi:H\rightarrow G$ s.t. the left diagram below commutes.
<center><img src="https://nivent.github.io/images/blog/geo-group/diag2.png" width="300" height="100" /></center>
By commutativity of the left diagram, we get that $\psi\circ\phi:G\rightarrow G$ extends the embedding $S\hookrightarrow G$, but the identity function $1_G:G\rightarrow G$ does this as well. Since such a homomorphism is unique, we must have $\psi\circ\phi=1_G$. Similar reasoning shows that $\phi\circ\psi=1_H$, so $\phi,\psi$ are isomorphisms.
</div>
<p>This universal property of free groups gives a natural segue into our next topic. Intuitively, a presentation of a group is a compact way of writing it down. Instead of specifying every single element and how to multiply them, you only write down some set of generators and relations (e.g. words equivalent to the identity). The notation looks like</p>
<script type="math/tex; mode=display">G\simeq\pres{\text{generators}}{\text{relations}}</script>
<p>so for example, we have $F_1\simeq\gen{a}\simeq\Z$ and $\zmod n\simeq\pres{a}{a^n}$. In order to formalize this, we make use of the following theorem.</p>
<div class="exercise">
Prove that every group is the quotient of a free group
</div>
<p>Now, given a group $G$, in order to write down a presentation for it, we first find some free group $F(S)$ and normal subgroup $K\le F(S)$ s.t. $G\simeq F(S)/K$. Then, letting $R\subset K$ be a generating set for $K$, our presentation is</p>
<script type="math/tex; mode=display">G\simeq\pres SR</script>
<p>giving a formal definition of the notation<sup id="fnref:13"><a href="#fn:13" class="footnote">15</a></sup></p>
<div class="example">
The dihedral group $D_{2n}$ (symmetries of a regular $n$-gon) has presentation $D_{2n}\simeq\pres{r,f}{r^n,frfr}$ where $r$ is rotation by $2\pi/n$ degrees and $f$ is flipping across the diagonal.
</div>
<div class="exercise">
Group presentations are not unique. Show that $\pres{x,y}{xyx=yxy}\simeq\pres{a,b}{a^2=b^3}$
</div>
<div class="exercise">
$\zmod 2\simeq\pres{a}{a^2}$ is a 2-element group. What is the cardinality of $G\simeq\pres{a,b}{a^2,b^2}$? Can you find a familiar group that $G$ contains as a subgroup?
</div>
<h1 id="second-neat-application">Second Neat Application</h1>
<p>Now that we’re a bit more acquainted with how horrible general groups can be, let’s focus on something a bit more familiar</p>
<script type="math/tex; mode=display">\DeclareMathOperator{\SL}{SL}\SL_2(\Z)=\brackets{\left.\mat abcd\right| ad-bc=1}</script>
<p>The group of $2\times2$ matrices with integer entries and determinant $1$. Linear algebra is a particularly nice subject, and this is a very linear-algebraic group, so it must certainly be really nice, right?</p>
<div class="theorem">
$\SL_2(\Z)$ contains $F_2$ as a subgroup
</div>
<p>Like I said, general groups are trash. This application will be similar to the last; we’ll prove a lemma that will give us a direct route to this theorem.</p>
<div class="lemma" name="Ping-Pong">
Let $G$ be a group generated by two elements $a,b$, and suppose that $G$ acts on a set $X$. If $X$ has disjoint nonempty subsets $X_a$ and $X_b$ s.t. $a^k\cdot X_b\subseteq X_a$ and $b^k\cdot X_a\subseteq X_b$ for all nonzero $k$, then $G\simeq F_2$.
</div>
<div class="proof4">
First, convince yourself that every non-empty word in $a,b$ is conjugate to one of the form $g=a^*b^*\dots b^*a^*$ where the stars are arbitrary nonzero exponents. Now, $g$ is not the identity: $g\cdot X_b\subseteq X_a$ and $X_a\cap X_b=\emptyset$, so $g$ cannot fix any element of $X_b$. Since a conjugate of the identity is the identity, no non-empty word in $a,b$ is trivial, so $G$ has no non-trivial relations and the conclusion follows.
</div>
<div class="exercise">
Show that every non-empty word in $a,b$ is conjugate to one of the form used in the proof.
</div>
<p>The above proof is a gem in and of itself, because it’s so clean. In case some clarity is lost in its brevity, the idea is that as you apply each syllable (i.e. $a^*$ or $b^*$) of $g$ to $X_b$, you keep bouncing back and forth between $X_b$ and $X_a$ (like a game of ping-pong), landing away from where you started. This simple lemma will let us prove this section’s main theorem in a rather concrete way.</p>
<div class="theorem">
Fix some integer $m\ge2$. Let
$$A =\mat 1m01\text{ and }B=\mat10m1$$
Then, $A,B$ generate a free subgroup of rank 2 in $\SL_2(\Z)$.
</div>
<div class="proof4">
It is easily verified that $A,B\in\SL_2(\Z)$, and an induction argument shows that
$$A^k=\mat1{km}01\text{ and }B^k=\mat10{km}1$$
Now, note that $A,B$ have a natural action on $\R^2$ via multiplication. Consider the sets
$$X_A=\brackets{\left.\vvec xy\in\R^2\right||x|>|y|}\text{ and }X_B=\brackets{\left.\vvec xy\in\R^2\right||x|<|y|}$$
Given $v=\hvec xy^T\in X_A$ and $v'=\hvec{x'}{y'}^T\in X_B$, we have $B^kv\in X_B$ since
$$|y+kmx|\ge|kmx|-|y|=m|k||x|-|y|\ge2|x|-|y|>|x|$$
for $k\in\Z\sm\{0\}$. Similarly, $A^kv'\in X_A$, so the Ping-Pong lemma applies.
</div>
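<p>The two containments used in the proof are easy to spot-check numerically for $m=2$ (an illustrative sketch of my own, not a replacement for the inequality above):</p>

```python
# Check that A^k maps X_B into X_A and B^k maps X_A into X_B for m = 2,
# over a sample of exponents k and test vectors.
m = 2

def apply(mat, v):
    (a, b), (c, d) = mat
    return (a * v[0] + b * v[1], c * v[0] + d * v[1])

def Ak(k):  # A^k = [[1, km], [0, 1]]
    return ((1, k * m), (0, 1))

def Bk(k):  # B^k = [[1, 0], [km, 1]]
    return ((1, 0), (k * m, 1))

in_XA = lambda v: abs(v[0]) > abs(v[1])  # |x| > |y|
in_XB = lambda v: abs(v[0]) < abs(v[1])  # |x| < |y|

for k in [-3, -2, -1, 1, 2, 3]:
    for t in range(1, 10):
        vb = (1, t + 1)  # a vector in X_B
        va = (t + 1, 1)  # a vector in X_A
        assert in_XA(apply(Ak(k), vb))
        assert in_XB(apply(Bk(k), va))
print("ping-pong containments hold")
```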
<h1 id="results-for-future-posts">Results for Future Posts</h1>
<p>I mentioned in the beginning that I wouldn’t be able to cover all the results I would like. In this final section, I want to mention some of the things I left out.</p>
<div class="lemma">
If a group acts freely on a tree, then it is a free group.
</div>
<p>An immediate application of this lemma<sup id="fnref:15"><a href="#fn:15" class="footnote">16</a></sup> is the following</p>
<div class="theorem" name="Nielsen-Schreier">
Any subgroup of a free group is free
</div>
<p>While this fact may seem obvious or benign, it is definitely non-trivial. As an attempt to appreciate this tree lemma, I challenge you to prove this theorem completely algebraically (Spoiler: <sup id="fnref:16"><a href="#fn:16" class="footnote">17</a></sup>).</p>
<p>Keeping in touch with the theme of free groups, another surprising result is that $\SL_2(\Z)$ actually contains many more free groups than the one I pointed out in the last section.</p>
<div class="theorem">
For all $m\ge3$, the group
$$ \SL_2(\Z)[m]:=\ker(\SL_2(\Z)\rightarrow\SL_2(\zmod m)) $$
is free.
</div>
<p>This theorem is actually also proved by exhibiting a free action of this group on a tree.</p>
<p>Moving away from free groups, there’s a notion similar to an isometry but weaker called a quasi-isometry.</p>
<div class="definition">
Let $(X,d_X)$ and $(Y,d_Y)$ be metric spaces. A function $f:X\rightarrow Y$ is called a <b>quasi-isometry</b> if there are constants $K\ge1$ and $C\ge0$ s.t.
$$\frac1Kd_X(x_1,x_2)-C\le d_Y(f(x_1),f(x_2))\le Kd_X(x_1,x_2)+C$$
for all $x_1,x_2\in X$ and there's a constant $D>0$ so that for every point $y\in Y$, there's some $x\in X$ s.t. $d_Y(f(x),y)\le D$.
</div>
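To see the definition in action, here is a small Python sketch (mine; the sampling windows and the constants $K=2$, $C=0$, $D=1$ are my own choices for this example) that checks the quasi-isometry inequalities on a finite sample for $f(n)=2n$ on $\Z$:

```python
# Finite-sample check of the quasi-isometry conditions (not from the post).

def is_quasi_isometry_on_sample(f, X, Y, K, C, D):
    d = lambda a, b: abs(a - b)          # both spaces carry the metric |a - b|
    for x1 in X:
        for x2 in X:
            if not (d(x1, x2) / K - C <= d(f(x1), f(x2)) <= K * d(x1, x2) + C):
                return False
    image = [f(x) for x in X]            # every y must be within D of the image
    return all(min(d(fx, y) for fx in image) <= D for y in Y)

X, Y = range(-20, 21), range(-40, 41)
assert is_quasi_isometry_on_sample(lambda n: 2 * n, X, Y, K=2, C=0, D=1)
# A cubic map already fails the linear upper bound on this sample:
assert not is_quasi_isometry_on_sample(lambda n: n ** 3, X, Y, K=2, C=0, D=1)
```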
<div class="theorem">
If $G$ is quasi-isometric to $\Z^n$ (both with the word metric), then $G$ contains $\Z^n$ as a finite index subgroup.
</div>
<p>This theorem is particularly interesting because it is highly geometric (or at least, the premise is). It is unclear how to even formulate this theorem algebraically, if such a thing can be done.</p>
<p>As the name of this section suggests, I may return to give actual proofs of some of these results in a future post, but for now, I think this post has gone on long enough.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>I haven’t looked into the book too deeply yet, so maybe this changes towards the end <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>Plus many exercises (don’t feel pressured to do them all) <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:9">
<p>This section was originally titled “Groups aren’t abstract nonsense” with the title of the post being “Geometric Group Theory”. While writing it, I decided that the concreteness of groups was a recurring-enough theme to be reflected in the title. Secretly though, groups are just groupoids (categories where every morphism is an isomorphism) with one object, so they actually are abstract nonsense. <a href="#fnref:9" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>This won’t come up here, but it’s worth mentioning that much of representation theory is simply the study of group actions, and specifically, groups acting on vector spaces <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>I want to make and insert an explicit example image of this, but was too lazy to do so. Hence, I encourage you to do this yourself with the group D_8 (symmetries of a square) acting on a regular octagon <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
<li id="fn:17">
<p>Burnside was not the first to prove this lemma. He, himself, attributed it to Frobenius but even before Frobenius, it was known to Cauchy. I do not know if Cauchy was the first to prove it or not. <a href="#fnref:17" class="reversefootnote">↩</a></p>
</li>
<li id="fn:6">
<p>Honestly, half of graph theory is just setting up definitions. Also, shameless plug, I defined 2/3 of these in a <a href="../probabilstic-method">previous post</a> <a href="#fnref:6" class="reversefootnote">↩</a></p>
</li>
<li id="fn:5">
<p>Recall two sets are the same when there’s a bijection between them, and the symmetries of a set are the bijections to itself <a href="#fnref:5" class="reversefootnote">↩</a></p>
</li>
<li id="fn:7">
<p>I don’t actually know the answer to this, but I suspect it doesn’t; at the very least, when trying to prove it without this assumption, I ran into issues showing that edge labels can’t change. If you decide to look for a (small) counterexample, I believe (but do not know for sure) that one should exist among finite groups with two or three generators <a href="#fnref:7" class="reversefootnote">↩</a></p>
</li>
<li id="fn:8">
<p>Worth mentioning that we’ve just turned any group into a geometric object: a metric space <a href="#fnref:8" class="reversefootnote">↩</a></p>
</li>
<li id="fn:10">
<p>Mostly referenced without proof at the very end <a href="#fnref:10" class="reversefootnote">↩</a></p>
</li>
<li id="fn:11">
<p>The book (pg. 40) seems to define a centroid as the minimizer of the sum of distances (not squared distances) <a href="#fnref:11" class="reversefootnote">↩</a></p>
</li>
<li id="fn:14">
<p>Interestingly enough, I had the opposite experience with linear algebra. At first I thought abstract vector spaces were cool because they were so abstract and potentially wild. When I first saw that every vector space had a basis, it ruined linear algebra for me because all these strange, crazy things I had been dealing with were essentially just R^n in disguise <a href="#fnref:14" class="reversefootnote">↩</a></p>
</li>
<li id="fn:12">
<p>i.e. groups you may not see in your first exposure to group theory. <a href="#fnref:12" class="reversefootnote">↩</a></p>
</li>
<li id="fn:13">
<p>To get a group from its presentation, just take the free group on its generators and quotient out by the smallest normal subgroup containing all the relations <a href="#fnref:13" class="reversefootnote">↩</a></p>
</li>
<li id="fn:15">
<p>Maybe plus the fact that free groups have (infinite) trees for Cayley graphs <a href="#fnref:15" class="reversefootnote">↩</a></p>
</li>
<li id="fn:16">
<p>There are other ways to prove this, but all the ones I know are geometric <a href="#fnref:16" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>I’ve recently been skimming through this book called “Office Hours with a Geometric Group Theorist” which, perhaps unsurprisingly, is about using geometric objects to study groups. It mostly focuses on how group actions on graphs and metric spaces can reveal information about the group1, and contains some pretty nice results. Unfortunately, I have too many in mind for one post, but I would still like to introduce the basic notions of the subject and a few results I enjoyed. I imagine this will be a lengthy post with a mix of introducing theory and neat applications2. I haven’t looked into the book too deeply yet, so maybe this changes towards the end ↩ Plus many exercises (don’t feel pressured to do them all) ↩Difference of squares2017-12-17T21:55:00+00:002017-12-17T21:55:00+00:00https://nivent.github.io/blog/difference-squares<p>Two new posts in one day? It must be Christmas. I think this post will be relatively short. I want to talk about a problem that popped in my head while I was working on the last post, and then mention some thoughts that this problem sparked which I hope are worth writing down before I forget.</p>
<h1 id="which-numbers-can-be-written-as-the-difference-of-two-squares">Which numbers can be written as the difference of two squares?</h1>
<p>Let’s just jump right into things. One natural place to start tackling this question is with the primes. With that said, let $p$ be a prime number and suppose we can write $p=x^2-y^2$ for some $x,y\in\Z_{\ge0}$. This gives<sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup></p>
<script type="math/tex; mode=display">p=x^2-y^2=(x-y)(x+y)\implies p=x+y\text{ and }x-y=1</script>
<p>which means we require that $p$ is the sum of two consecutive numbers! Now, this took me longer than I’d like to admit to realize while I was working on this, but this is equivalent to saying that $p$ is odd. In other words, all primes $p\neq2$ can be written as the difference of two squares, namely $p=\lceil p/2\rceil^2-\lfloor p/2\rfloor^2$.</p>
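As a quick numeric check (my addition, not from the post), here is a short Python snippet verifying the formula $p=\lceil p/2\rceil^2-\lfloor p/2\rfloor^2$ for the first few odd primes:

```python
# Verify p = ceil(p/2)^2 - floor(p/2)^2 for odd primes below 200.

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in range(3, 200, 2):
    if not is_prime(p):
        continue
    hi, lo = (p + 1) // 2, p // 2         # ceil(p/2), floor(p/2)
    assert hi - lo == 1 and hi + lo == p  # the x - y = 1, x + y = p solution
    assert hi ** 2 - lo ** 2 == p
```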
<p>Since we’ve completely characterized which primes are differences of squares, we really hope that the product of differences of squares is also a difference of squares. I claim that a natural way of reaching this conclusion is to make use of the ring $\Z[\eps]\simeq\Z[x]/(x^2-1)$ where $\eps^2=1$. Using this ring lets us factor $x^2-y^2=(x+\eps y)(x-\eps y)$, and while we could factor things before, this factorization is more useful since we can easily calculate</p>
<script type="math/tex; mode=display">(a+b\eps)(c+d\eps)=(ac+bd)+\eps(ad+bc)</script>
<p>which is enough to see that $(a^2-b^2)(c^2-d^2)=(ac+bd)^2-(ad+bc)^2$ so being a difference of squares is preserved by multiplication. Since all odd primes are differences of squares, this gets us that all odd numbers are differences of squares<sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>.</p>
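The multiplicativity identity from $\Z[\eps]$ can be checked exhaustively over a small range (a sanity check of my own, not from the post):

```python
# Exhaustive small-range check of (a^2-b^2)(c^2-d^2) = (ac+bd)^2 - (ad+bc)^2,
# the identity coming from multiplication in Z[eps] with eps^2 = 1.

for a in range(-6, 7):
    for b in range(-6, 7):
        for c in range(-6, 7):
            for d in range(-6, 7):
                assert (a * a - b * b) * (c * c - d * d) \
                    == (a * c + b * d) ** 2 - (a * d + b * c) ** 2
```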
<p>Now, let $z=n^2m$ where $m=x^2-y^2$. Then, $z=n^2(x^2-y^2)=(nx)^2-(ny)^2$ is also a difference of squares. Hence, any number that $2$ divides an even number of times is a difference of squares. At this point, I was tempted to think that I was done, but then I realized that $8=3^2-1^2$ is also a difference of squares. Since $4=2^2-0^2$, we know that $2^2$ and $2^3$ are differences of squares, so $2^{2a+3b}$ is also a difference of squares where $a,b\in\Z_{\ge0}$. It’s not hard to see that every nonnegative integer can be written as $2a+3b$ except for 1 <sup id="fnref:3"><a href="#fn:3" class="footnote">3</a></sup>, so $2^1=2$ is the only power of two that cannot be written as a difference of squares.</p>
<p>Since every odd prime is a difference of squares, and all but one power of $2$ is also a difference of squares, we’ve shown that the only numbers that might not be differences of squares are those that $2$ divides exactly once. Another way of characterizing these “bad” numbers is that they are the $n\in\Z$ for which $n\equiv2\pmod4$. Now, the equation $x^2-y^2\equiv2\pmod4$ has no solutions, as one can verify by checking all 16 possible assignments of $x,y$. Thus, we’ve completely characterized the numbers that can be written as differences of squares; they are all the integers that are not $2\pmod4$! This is a surprisingly simple and nice outcome if you ask me.</p>
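The full characterization is easy to brute-force (my addition; the search bounds are ad hoc but safe, since a representation of $n<200$ needs $x\le(n+1)/2<101$):

```python
# Brute-force check: n >= 1 is a difference of two squares iff n != 2 mod 4.

diffs = {x * x - y * y for x in range(101) for y in range(101)}
for n in range(1, 200):
    assert (n in diffs) == (n % 4 != 2)
```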
<h1 id="thoughts">Thoughts<sup id="fnref:5"><a href="#fn:5" class="footnote">4</a></sup></h1>
<p>With that question resolved, I wanna mention some thoughts it motivated. In the process of answering the question, it was helpful to consider the ring $\Z[\eps]$ which morally felt like the ring of integers of the number field $K=\Q(\eps)$ <sup id="fnref:4"><a href="#fn:4" class="footnote">5</a></sup>. However, this is technically wrong since $K$ isn’t a number field; it’s not a field at all (or even just a domain since $(1-\eps)(1+\eps)=0$). Despite this, we still have a natural notion of a “relative norm” on $K$ over $\Q$ given by $\knorm(x+y\eps)=x^2-y^2$, which made me wonder how much of algebraic number theory can be recovered if we study ring extensions of $\Q$ like this one <sup id="fnref:7"><a href="#fn:7" class="footnote">6</a></sup>.</p>
<p>Taking a step back to a slightly more general setting, my curiosity shifted away from number theory specifically to wonder what happens if you do Galois theory in a more general setting like this. The “ring extension” $K/\Q$ morally feels like a degree 2 Galois extension with non-trivial automorphism given by $\sigma(x+y\eps)=x-y\eps$. After having this in the back of my mind all day, this is what I’ve discovered so far as the possible beginnings of a formalism…</p>
<p>Fix some field $\F$. We want to study (commutative) rings <sup id="fnref:6"><a href="#fn:6" class="footnote">7</a></sup> containing this field, so let $R$ be such a ring. $R$ is still an $\F$-vector space, so we can still define the degree of the extension $R/\F$ as $[R:\F]:=\deg_{\F}R$. However, if we think on this more, it might make more sense to think of $R$ less as some sort of extension ring, and more as an $\F$-algebra. I don’t know how beneficial this algebra viewpoint is compared to thinking in terms of ring extensions, but it does at least suggest that the true object of interest of this hypothetical generalized Galois theory should be $R$-algebras where we require that $R$ is a $k$-algebra for some field $k$ (or equivalently, that $R$ is a vector space).</p>
<p>The first major issue I see with recovering Galois theory in this setting is the behavior of towers of extensions. Classically, if we have $L/K/F$ a tower of field extensions, then we get that $[L:F]=[L:K][K:F]$ and this allows one to perform induction arguments <sup id="fnref:8"><a href="#fn:8" class="footnote">8</a></sup>. This fact basically follows from the niceness of vector spaces, but since generally for a ring $R$, non-free $R$-modules exist, we face some issues with studying towers of ring extensions. It’s possible that $R$ being a $k$-algebra (for $k$ a field) is a strong enough restriction to force all $R$-algebras to be free $R$-modules, but I don’t know enough algebra to think of a proof or counterexample to that claim off the top of my head (although it’s almost certainly false), so this issue remains unresolved. Despite this, if one could find a way to get around this issue of towers of extensions, then<sup id="fnref:9"><a href="#fn:9" class="footnote">9</a></sup> I think you can manage to recover at least a few gems from Galois theory. I’m a hopeful enough person to think that this might be possible in some nice settings, so</p>
<blockquote>
<p>Conjecture<br />
Let $\F$ be a field, and fix some $f(x)\in\F[x]$. Let $R$ be the “splitting ring” of $f(x)$ <sup id="fnref:10"><a href="#fn:10" class="footnote">10</a></sup>. Then, the number of automorphisms $\sigma:R\rightarrow R$ fixing $\F$ is at most $\deg_{\F}R$.</p>
</blockquote>
<p>I obviously don’t know if this conjecture is true, but I feel that something like it should be true. I suspect I won’t do a lot of thinking about this anytime soon, so I leave this here to one day return to it and continue my thoughts.</p>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>It’s technically also possible that x-y=-1, but without loss of generality, assume x>y <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>This didn’t hit me until after I’d done all the work that went into this post, but the difference between the nth square and the (n+1)th square is the nth odd number (i.e. (n+1)^2-n^2=2n+1) so this conclusion is trivial <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>If n=2k is even, then take a=k and b=0. If n=2k+1 is odd, then take a=k-1 and b=1 (this fails only if k=0, i.e. if n=1) <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:5">
<p>of a Programmer <a href="#fnref:5" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>I’m pretty sure this notation technically makes no sense, but by it, I mean the ring Q(e)={a+be:a,b are fractions} where e^2=1 <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
<li id="fn:7">
<p>One immediate question I haven’t thought about the answer to is “does the traditional definition of the ring of integers still work?” In this example, one would expect that the integer ring of Q(e) should be Z[e], but I haven’t verified that this is the integral closure of Z <a href="#fnref:7" class="reversefootnote">↩</a></p>
</li>
<li id="fn:6">
<p>I’m hesitant to require commutativity because the quaternions also feel morally like an extension that should be considered in this more general theory, albeit not a Galois one <a href="#fnref:6" class="reversefootnote">↩</a></p>
</li>
<li id="fn:8">
<p>If you’re studying an extension K/F, you might start by picking a in K-F so you get the tower K/F(a)/F. Prove your statement for F(a)/F, get the same thing for K/F(a) by induction on degrees, and then use these together to conclude something about K/F <a href="#fnref:8" class="reversefootnote">↩</a></p>
</li>
<li id="fn:9">
<p>Don’t quote me on this; I haven’t thought about it too deeply <a href="#fnref:9" class="reversefootnote">↩</a></p>
</li>
<li id="fn:10">
<p>i.e. the smallest ring (containing F) in which f splits <a href="#fnref:10" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>Two new posts in one day? It must be Christmas. I think this post will be relatively short. I want to talk about a problem that popped in my head while I was working on the last post, and then mention some thoughts that this problem sparked which I hope are worth writing down before I forget.An interesting equation2017-12-17T20:39:00+00:002017-12-17T20:39:00+00:00https://nivent.github.io/blog/interesting-equation<p>One day I will return to writing posts that are not always very algebraic in nature, but this is not that day. I want to talk today about an example of a peculiar equation, but first a little background… In my mind, number theory (at least on the algebraic side) is ultimately about solving diophantine equations and not much more. This is what originally got me interested in the subject because trying to solve these equations can often feel like some sort of puzzle or exploratory game; there’s a common set of tricks one can apply, but not much of a single path or algorithm that always gets you the solution. Among the most basic/fundamental of tricks is to use congruences. If you seek (integer) solutions to $x^2+y^2=3$, then a natural thing to do is consider this equation (mod $4$) and note that there are no solutions to $x^2+y^2\equiv3\pmod4$ <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>, so there are no integer solutions to the original equation; easy. In fact, you can state this principle in general.</p>
<blockquote>
<p>Fact<br />
Let $p(x,y)$ be some polynomial with integer coefficients. If $p(x,y)\equiv0\pmod m$ has no solutions for some $m$, then $p(x,y)$ has no integer solutions.</p>
</blockquote>
<p>It’s not far-fetched to imagine that this is the only thing preventing polynomials from having integer solutions. That is, it’s natural to ask whether any polynomial that can be solved $\pmod m$ for all $m\in\Z_{>0}$ must be solvable in the integers. However, it turns out that this is not the case, and the subject of this post is a single counterexample</p>
<script type="math/tex; mode=display">\begin{align*}
x^2-82y^2=2
\end{align*}</script>
<p>For convenience, let $q(x,y)=x^2-82y^2-2$. I haven’t thought too much about this, but for understanding this post, probably <a href="../number-theory">these</a> <a href="../solving-pell">two</a> posts on number theory, and some <a href="../ring-intro">ring theory</a> should be sufficient; if you don’t feel like reading those, then just stick with this post and if anything doesn’t make sense, you can refer back to those posts to figure it out or leave a comment, asking a question.</p>
<h1 id="solutions-in-q-and-zmod-m">Solutions in $\Q$ and $\zmod m$</h1>
<p>We’ll first establish that this equation has solutions both in $\Q$ and in $\zmod m$ for all $m$. For $\Q$, we’ll actually do one better and show that it has infinitely many solutions. We will first search for a single rational solution, so let $x=a/b$ and $y=c/d$ where $a,b,c,d\in\Z$. In order to keep things simple, we’ll assume $b=d$ so we can rewrite our equation as</p>
<script type="math/tex; mode=display">\begin{align*}
x^2-82y^2=2\iff(a/b)^2-82(c/d)^2=2\iff a^2-82c^2=2b^2\iff a^2=2b^2+82c^2
\end{align*}</script>
<p>This suggests one way of finding a solution. We just need to search for integers $b,c$ such that $2b^2+82c^2$ is a perfect square. If you were to try some examples by hand or write a computer program to search, you’d eventually come across $2(3)^2+82(1)^2=10^2$ which gives $(x,y)=(10/3,1/3)$ as a solution to $q(x,y)\in\Q[x,y]$. This is one solution, but far from the only one.</p>
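The search described above is easy to script (my addition, not from the post): look for $b,c$ making $2b^2+82c^2$ a perfect square, which yields a rational point with $x=r/b$, $y=c/b$ (taking $b=d$ as above):

```python
# Search for b, c with 2*b^2 + 82*c^2 a perfect square.

from math import isqrt

hits = []
for b in range(1, 20):
    for c in range(1, 20):
        n = 2 * b * b + 82 * c * c
        r = isqrt(n)
        if r * r == n:
            hits.append((b, c, r))

assert (3, 1, 10) in hits                # gives the solution (x, y) = (10/3, 1/3)
assert abs((10 / 3) ** 2 - 82 * (1 / 3) ** 2 - 2) < 1e-9
```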
<blockquote>
<p>Exercise<br />
Show that $q(x,y)\in\Q[x,y]$ has infinitely many rational solutions <sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>.</p>
</blockquote>
<p>To show that this equation has solutions in $\zmod m$, it’ll be sufficient to show that it can be solved for $m$ a prime power. This is a consequence of the <a href="https://www.wikiwand.com/en/Chinese_remainder_theorem">Chinese Remainder Theorem</a>.</p>
<blockquote>
<p>Theorem<br />
Let $p\neq3$ be a prime. Then, $q(x,y)$ has a solution viewed as a polynomial $q(x,y)\in\zmod{p^r}[x,y]$ for all $r$. That is, there exist integers $a,b$ s.t. <center>$a^2-82b^2\equiv2\pmod{p^r}$</center></p>
</blockquote>
<div class="proof2">
Pf: Fix any positive integer $r$, and note that $\gcd(3,p^r)=\gcd(3,p)=1$, which means $3$ is a unit in $\zmod{p^r}$. Fix some $b$ s.t. $3b\equiv1\pmod{p^r}$ and note that $(x,y)=(10b,b)$ is a solution to $q(x,y)\in\zmod{p^r}[x,y]$ as <center>$$100b^2-82b^2=18b^2\equiv2(3b)(3b)\equiv2\pmod{p^r}$$</center>$\square$
</div>
<p>This just leaves the case of powers of $3$. This isn’t actually a special case, and can be handled much in the same way as all the others. We begin by noting that $(66/13,-7/13)$ is a rational solution to $q(x,y)$. Since $3$ does not divide $13$, we say that they are coprime, and so $13$ is a unit in $\zmod{3^r}$ for all $r$. Hence, $(66/13,-7/13)$ still makes sense as a solution in $\zmod{3^r}$. This gives away one solution to the exercise, but in case you’re curious, the footnote explains where this point came from <sup id="fnref:3"><a href="#fn:3" class="footnote">3</a></sup>.</p>
<p>Thus, we’ve shown $q(x,y)$ has solutions (mod $p^r$) for all prime powers $p^r$, and hence it has solutions (mod $m$) for all integers $m$.</p>
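Both constructions above can be checked numerically (my addition; uses Python 3.8+'s `pow(a, -1, m)` for modular inverses):

```python
# Check (10*b, b) with 3*b = 1 mod p^r for p != 3, and (66/13, -7/13) mod 3^r.

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for p in [q for q in range(2, 30) if is_prime(q) and q != 3]:
    for r in range(1, 4):
        mod = p ** r
        b = pow(3, -1, mod)              # 3 is a unit since gcd(3, p^r) = 1
        assert ((10 * b) ** 2 - 82 * b * b) % mod == 2 % mod

for r in range(1, 5):
    mod = 3 ** r
    inv13 = pow(13, -1, mod)             # 13 is a unit mod 3^r
    x, y = (66 * inv13) % mod, (-7 * inv13) % mod
    assert (x * x - 82 * y * y) % mod == 2 % mod
```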
<h1 id="chinese-remainder-theorem">Chinese Remainder Theorem</h1>
<p>This section can be skipped, but I wanted to give a statement and proof of CRT for completeness.</p>
<blockquote>
<p>Chinese Remainder Theorem<br />
Let $R$ be a ring, and $I_1,\dots,I_n$ be a collection of pairwise coprime two-sided ideals (i.e. $I_i+I_j=R$ for all $i\neq j$). Then, we have a ring isomorphism</p>
</blockquote>
<center>$$\Large\begin{matrix}
\frac R{I_1\cap I_2\cap\dots\cap I_n} &\longrightarrow& \frac R{I_1}\oplus\frac R{I_2}\oplus\dots\oplus\frac R{I_n}\\
r+I_1\cap I_2\cap\dots\cap I_n &\longmapsto& \left(r+I_1,r+I_2,\dots,r+I_n\right)
\end{matrix}$$</center>
<div class="proof2">
Pf: We will prove this by induction on $n$, starting with the case of two ideals and the map $\phi:(r+I_1\cap I_2)\mapsto(r+I_1,r+I_2)$. We first need to confirm that this map is well-defined. Pick some $r,s\in R$ in the same coset so $r-s\in I_1\cap I_2$. Practically by definition, this means that $r+I_1=s+I_1$ and $r+I_2=s+I_2$ so $\phi$ is well-defined. From the behavior of cosets, it's clear that $\phi$ is a homomorphism so we only need to verify injectivity and surjectivity. Now, pick some $r+I_1\cap I_2\in\ker\phi$, so $(r+I_1,r+I_2)=(I_1,I_2)$ is the identity. Then $r\in I_1$ and $r\in I_2$, i.e. $r\in I_1\cap I_2$, so the coset $r+I_1\cap I_2$ is trivial and $\phi$ has trivial kernel, which only leaves surjectivity. Fix some $x\in I_1$ and $y\in I_2$ such that $x+y=1$ which we get from coprimality. Then, $(r+I_1,s+I_2)$ has $sx+ry$ as a preimage, so $\phi$ is surjective. Now that we've established the case of two ideals, the general case will follow if we can show that $I_1$ and $I_2\cap\dots\cap I_n$ are coprime. To this end, for each $2\le j\le n$ pick $x_j\in I_1$ and $y_j\in I_j$ s.t. $x_j+y_j=1$. Then, <center>$$1=(x_2+y_2)(x_3+y_3)\dots(x_n+y_n)=\sum_{S\subseteq\{2,\dots,n\}}\left(\prod_{i\in S}x_i\right)\left(\prod_{j\not\in S}y_j\right)=X+\left(\prod_{j=2}^ny_j\right)$$</center>
where $X\in I_1$ is a sum of terms each containing $x_i$ for some $i\in\{2,\dots,n\}$ and $\prod y_j\in I_2\cap\dots\cap I_n$. Thus, $1\in I_1+(I_2\cap\dots\cap I_n)$ so $I_1+(I_2\cap\dots\cap I_n)=R$ which means these ideals are coprime as claimed. $\square$
</div>
<blockquote>
<p>Corollary<br />
Fix an integer $m$ and factor it as $m=p_1^{r_1}p_2^{r_2}\dots p_n^{r_n}$ where each $p_i$ is a different prime. Then, <center>$\zmod m\simeq\zmod{p_1^{r_1}}\oplus\zmod{p_2^{r_2}}\oplus\dots\oplus\zmod{p_n^{r_n}}$</center></p>
</blockquote>
<div class="proof2">
Pf: Exercise to reader
</div>
<p>For our purposes, we only need the corollary and not the full CRT. We want to confirm that $x^2-82y^2=2$ has solutions (mod $m$) for all $m$. Well, given any $m$, we factor it into prime powers to see that $\zmod m\simeq\zmod{p_1^{r_1}}\oplus\zmod{p_2^{r_2}}\oplus\dots\oplus\zmod{p_n^{r_n}}$. In the previous section, we found solutions in each of these factors so let $(x_j,y_j)$ satisfy $x_j^2-82y_j^2\equiv2\pmod{p_j^{r_j}}$. Then, CRT guarantees the existence of some <script type="math/tex">x^*,y^*\in\zmod m</script> such that <script type="math/tex">x^*\equiv x_j\pmod{p_j^{r_j}}</script> and <script type="math/tex">y^*\equiv y_j\pmod{p_j^{r_j}}</script> for all $j$. Thus, <script type="math/tex">(x^*)^2-82(y^*)^2\equiv2\pmod{p_j^{r_j}}</script> for all $j$. Since <script type="math/tex">(x^*,y^*)</script> satisfy $q(x,y)$ in each factor of $\zmod m$ (i.e. in each $\zmod{p_j^{r_j}}$), they must satisfy it in $\zmod m$ itself, so $q(x,y)$ does indeed have solutions modulo any integer.</p>
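The gluing step can be sketched in a few lines of Python (my addition; `crt_pair` and `solve_mod` are helper names of my own): brute-force solutions mod two coprime prime powers, then combine them with CRT.

```python
# CRT gluing of solutions to x^2 - 82*y^2 = 2 mod 8 and mod 9 into one mod 72.

def crt_pair(a1, m1, a2, m2):
    """The x mod m1*m2 with x = a1 mod m1 and x = a2 mod m2 (m1, m2 coprime)."""
    return (a1 + m1 * ((a2 - a1) * pow(m1, -1, m2) % m2)) % (m1 * m2)

def solve_mod(m):
    """Brute-force a solution (x, y) to x^2 - 82*y^2 = 2 mod m."""
    for x in range(m):
        for y in range(m):
            if (x * x - 82 * y * y) % m == 2 % m:
                return x, y

(x1, y1), (x2, y2) = solve_mod(8), solve_mod(9)
x, y = crt_pair(x1, 8, x2, 9), crt_pair(y1, 8, y2, 9)
assert (x * x - 82 * y * y) % 72 == 2
```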
<h1 id="no-solutions-in-z">No Solutions in $\Z$</h1>
<p>To finish things off, we’ll show that there are no integer solutions to $x^2-82y^2=2$. This section will use some of the ideas previously touched upon in my <a href="../solving-pell">pell’s post</a>. Our first observation is that the right setting to analyze this equation is in $\zadjs{82}$, which, using terminology from that pell’s post, is the ring of integers for $K=\qadjs{82}$. We see that solutions to this equation correspond exactly to elements of $\zadjs{82}$ with norm $2$. As it turns out, understanding which numbers have norm $2$ is related to understanding how $2$ factors in $\ints K=\zadjs{82}$. More specifically, we wish to factor $(2)$ into prime ideals:</p>
<script type="math/tex; mode=display">(2)=(2,\sqrt{82})^2</script>
<p>This equality is easily verified as $(2,\sqrt{82})^2=(4,2\sqrt{82},82)=(2)(2,\sqrt{82},41)=(2)$ since $(2,\sqrt{82},41)=(1)$ as $41-2(20)=1$. Now, suppose $z=x+y\sqrt{82}$ ($x,y\in\Z$) has norm 2, i.e. assume that $x^2-82y^2=2$. It is a fact, which I will not prove, that this is possible only if $(x+y\sqrt{82})=(2,\sqrt{82})$. Given this, we see that $(z^2)=(z)^2=(2)$ so $z^2=2u$ for some unit $u\in\ints K^\times$. Taking norms of both sides gives</p>
<script type="math/tex; mode=display">4\knorm(u)=\knorm(2)\knorm(u)=\knorm(2u)=\knorm(z^2)=\knorm(z)^2=4\implies\knorm(u)=1</script>
<p>Now, note that $\zadjs{82}$ has fundamental unit <script type="math/tex">\eps=9+\sqrt{82}</script> and that <script type="math/tex">\knorm(\eps)=-1</script>. Since every unit is $\pm$ a power of $\eps$, this means we can write $u=\pm\eps^{2k}$ for some $k$. Thus, we can rewrite $z^2=2u$ as $(\eps^{-k}z)^2=\pm2$. To finish things off, we will show that neither of $\pm2$ is a square in $\ints K$, giving a contradiction. This is easily seen by observing that given any $a,b\in\Z$, we have</p>
<script type="math/tex; mode=display">(a+b\sqrt{82})^2=(a^2+82b^2)+2ab\sqrt{82}</script>
<p>This can’t be $-2$ because the non-$\sqrt{82}$ part $a^2+82b^2$ is always nonnegative, and it can’t be $+2$ since that would require $ab=0$, and neither $a^2=2$ nor $82b^2=2$ has an integer solution. Thus, $\pm2$ are not squares in $\ints K$ so there’s no element of norm $2$, which means that $x^2-82y^2=2$ has no integer solutions.</p>
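As a computational sanity check (mine, not part of the post), a brute-force search over a large window finds no integer solutions, consistent with the proof:

```python
# For each y, check that 2 + 82*y^2 is never a perfect square.

from math import isqrt

for y in range(0, 10_000):
    n = 2 + 82 * y * y
    assert isqrt(n) ** 2 != n            # so x^2 = 2 + 82*y^2 has no solution
```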
<h1 id="further-work">Further Work</h1>
<p>So we’ve shown that $x^2-82y^2=2$ has infinitely many rational solutions, and solutions in $\zmod m$ for all $m$, but no integer solutions. This means congruential obstructions are not the only things that can prevent a polynomial from being solved in the integers. We might still be interested in asking questions about better understanding congruential obstructions though. For example, in our analysis of this equation, the fact that we have solutions in $\zmod m$ for all $m$ was very related to the fact that we had (infinitely) many rational solutions, which begs the question</p>
<blockquote>
<p>Conjecture<br />
Let $p$ be a polynomial with integer coefficients. Then, $p$ has solutions in $\zmod m$ for all $m\iff p$ has infinitely many rational solutions.</p>
</blockquote>
<p>It actually turns out that this conjecture is false, and one counterexample is the polynomial</p>
<script type="math/tex; mode=display">p(x) = (x^2-2)(x^2-17)(x^2-34)</script>
<p>This has solutions (mod $m$) for all $m$ <sup id="fnref:4"><a href="#fn:4" class="footnote">4</a></sup>, but there are visibly no rational solutions. This breaks one direction of the iff above, but it’s still possible that the other direction holds, and I encourage you to investigate this.</p>
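For the skeptical, the claim about this polynomial can be spot-checked computationally (my addition; the real argument uses quadratic reciprocity and lifting, not brute force):

```python
# Check that (x^2-2)(x^2-17)(x^2-34) has a root mod every m up to 200.

def has_root_mod(m):
    return any((x * x - 2) * (x * x - 17) * (x * x - 34) % m == 0
               for x in range(m))

assert all(has_root_mod(m) for m in range(2, 201))
```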
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Just try all 16 possible pairs of values for x,y <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p><a href="../number-theory">hint</a> <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>I used the (10/3,1/3) solution to project the line y=0 onto the curve defined by x^2-82y^2=2. Under this projection/correspondence, the point (4,0) on this line gets mapped to (66/13,-7/13) on this curve <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>The easiest way to convince yourself of this claim is via <a href="https://www.wikiwand.com/en/Quadratic_reciprocity">quadratic reciprocity</a> <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>One day I will return to writing posts that are not always very algebraic in nature, but this is not that day. I want to talk today about an example of a peculiar equation, but first a little background… In my mind, number theory (at least on the algebraic side) is ultimately about solving diophantine equations and not much more. This is what originally got me interested in the subject because trying to solve these equations can often feel like some sort of puzzle or exploratory game; there’s a common set of tricks one can apply, but not much of a single path or algorithm that always gets you the solution. Among the most basic/fundamental of tricks is to use congruences. If you seek (integer) solutions to $x^2+y^2=3$, then a natural thing to do is consider this equation (mod $4$) and note that there are no solutions to $x^2+y^2\equiv3\pmod4$ 1, so there are no integer solutions to the original equation; easy. In fact, you can state this principle in general. Just try all 16 possible pairs of values for x,y ↩Algebra Part II2017-11-21T03:00:00+00:002017-11-21T03:00:00+00:00https://nivent.github.io/blog/ring-intro<p>This is the second post in a series that serves as an introduction to abstract algebra. In the <a href="../group-intro">last one</a>, we defined groups as certain sets endowed with a well-behaved operation. Here, we’ll look at rings which are what you get when your set has two operations defined on it, and we’ll see that much of the theory for groups has a natural analogue in ring theory.</p>
<blockquote>
<p>Bit of a Disclaimer<br />
I can’t possibly mention everything on a particular subject in one post, and I am not a particular fan of writing insanely long posts, so some things have to be cut. In particular, I aim to introduce most of the important topics in each subject without necessarily doing a deep dive, and while I will try to mention specific examples of things, I won’t spend too much time looking at them closely. It will be up to you to take the time to make sure the example makes sense. Because of this, I’ll try to include exercises that should be good checks of understanding. Finally, as always, things are presented according to my tastes and according to whatever order they happen to pop into my head; hence, they are not necessarily done the usual way.</p>
</blockquote>
<h2 id="rings-and-better-rings">Rings and Better Rings</h2>
<p>If you’ve heard of groups, I doubt I need to motivate rings much. Things like the integers, real numbers, matrices, etc. all form groups, but when considering them as such, you have to make a choice about whether you want to consider their additive structure or their multiplicative structure; why not look at both?</p>
<blockquote>
<p>Definition<br />
A <strong>ring</strong> $(R, +, \cdot)$ is a set $R$ together with two operations $+:R\times R\rightarrow R$ and $\cdot:R\times R\rightarrow R$ satisfying the following for all $a,b,c\in R$<br />
<script type="math/tex">% <![CDATA[
\begin{align*}
&\bullet (R, +)\text{ is an abelian group with additive identity }0\\
&\bullet a\cdot(b\cdot c) = (a\cdot b)\cdot c\\
&\bullet a\cdot(b+c) = a\cdot b+a\cdot c\text{ and }(a+b)\cdot c=a\cdot c+b\cdot c
\end{align*} %]]></script><br />
If additionally, $a\cdot b=b\cdot a$ always, then we call this a <strong>commutative ring</strong></p>
</blockquote>
<p>There are a few things worth noticing about the definition of a ring. First of all, it’s kinda short; at least, it was shorter than I expected the first time I saw it. There are like four different properties you need to satisfy to be an abelian group; to be a ring, you just need associative multiplication and (both left and right) distributivity. You don’t need inverses, and you don’t even need to have a multiplicative identity. This means you can get some weird stuff happening in general rings.</p>
<ul>
<li>In $2\Z$, you have $ab\neq a$ no matter which $a,b\in2\Z$ you choose</li>
<li>In $\zmod8$, you get $4*2=0$ even though $4,2\neq0$</li>
<li>Also in $\zmod8$, you get $7^2=1^2=3^2=5^2=1$ so you have 4 different square roots of $1$</li>
</ul>
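<p>These oddities are all finite checks; here is a quick brute-force sketch in Python (the variable names are my own, purely for illustration):</p>

```python
# Brute-force check of the Z/8Z claims above.
R = range(8)

# 4 * 2 = 0 (mod 8) even though 4, 2 != 0: zero divisors exist.
zero_divisors = sorted({a for a in R for b in R if a and b and (a * b) % 8 == 0})
print(zero_divisors)        # [2, 4, 6]

# Four distinct square roots of 1 (mod 8).
square_roots_of_1 = [a for a in R if (a * a) % 8 == 1]
print(square_roots_of_1)    # [1, 3, 5, 7]
```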
<p>Because of this, we will see a number of different types of rings with increasingly more conditions on them, guaranteeing nice behavior. Also, in case you were wondering why we require an abelian group under addition and not just a group, it’s because general groups and noncommutative rings are ugly enough separately; having one object that doesn’t commute under addition or multiplication just sounds awful <sup id="fnref:1"><a href="#fn:1" class="footnote">1</a></sup>. Now, let’s see some additional conditions we may want to place on rings</p>
<blockquote>
<p>Definition<br />
A ring $R$ is said to <strong>have unity</strong> (alternatively, $R$ is <strong>unital</strong>) if there exists some $1\in R$ such that $a\cdot1=1\cdot a=a$ for all $a\in R$.</p>
</blockquote>
<p>This gets rid of the first problematic ring I mentioned above, but not the other two. The third one may stick around for a while, but the second we really don’t like.</p>
<blockquote>
<p>Definition<br />
An element $a\in R$ is called a <strong>zero divisor</strong> if $a\neq0$ and there exists some nonzero $b\in R$ such that $ab=0$</p>
</blockquote>
<blockquote>
<p>Definition<br />
An <strong>integral domain</strong> $D$ is a commutative ring with unity and no zero divisors</p>
</blockquote>
<blockquote>
<p>Question<br />
Is the zero ring an integral domain?</p>
</blockquote>
<p>Now that’s something we can work with. In practice, almost all rings you work with will have unity, the majority will be commutative <sup id="fnref:2"><a href="#fn:2" class="footnote">2</a></sup>, and plenty of them will be integral domains. The fact that integral domains don’t have zero divisors means that we can “cancel” multiplication. Normally when we have some equation like $ab=ac$ (with $a\neq0$), we cancel out the $a$’s and conclude that $b=c$. However, as the $4\cdot2=0$ example from $\zmod8$ shows, this isn’t always legitimate. In high-school algebra, we justify this cancellation by saying we multiply both sides by $a^{-1}$, but $a^{-1}$ won’t exist most of the time in general! Luckily, even if it doesn’t, we can still justify cancellation most of the time.</p>
<blockquote>
<p>Theorem<br />
Let $D$ be a ring. Then, left (respectively, right) cancellation by nonzero elements holds iff $D$ has no zero divisors</p>
</blockquote>
<div class="proof2">
Pf: $(\rightarrow)$ Assume left cancellation holds. Pick nonzero $a\in D$ and any $b\in D$ such that $ab=0=a0$. By left cancellation, this means $b=0$, so there are no zero divisors.<br />
($\leftarrow$) Assume $D$ has no zero divisors. Pick $a,b,c\in D$ with $a$ nonzero such that $ab=ac$. Then, we can subtract to get $0=ac-ab=a(c-b)$. Since there are no zero divisors, either $a=0$ or $c-b=0$, but $a\neq0$ by assumption so $c-b=0$ and $c=b$. $\square$
</div>
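<p>A concrete illustration of both directions of this theorem, again as a brute-force Python sketch:</p>

```python
# In Z/8Z, 4*2 == 4*6 (mod 8) even though 2 != 6: cancellation fails
# because 4 is a zero divisor (4*(6-2) = 16 = 0 mod 8).
a, b, c = 4, 2, 6
assert (a * b) % 8 == (a * c) % 8 and b != c

# In Z/7Z there are no zero divisors, so cancellation always works.
for a in range(1, 7):
    for b in range(7):
        for c in range(7):
            if (a * b) % 7 == (a * c) % 7:
                assert b == c
```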
<p>Above you’ll notice that we used $a0=0=0a$ for all $a$ in a ring. This is nonobvious, but also not all that profound. You can prove basic properties of rings like this <sup id="fnref:3"><a href="#fn:3" class="footnote">3</a></sup> without much effort, so I’ll omit them. Furthermore, as with groups, we can define <strong>ring homomorphisms (or ring maps)</strong> as maps $f:R\rightarrow S$ such that $f(a+b)=f(a)+f(b)$ and $f(ab)=f(a)f(b)$; additionally, if $R,S$ both have unity, we require $f(1_R)=1_S$. We can also define the <strong>kernel</strong> of a ring map $f:R\rightarrow S$ to be the subset of $R$ mapping to $0$. There’s also a notion of subring that’s exactly what you think it is.</p>
<p>For the rest of this post, I think we’ll be looking (almost) exclusively at commutative rings with unity, so unless otherwise specified, assume that’s the case. Now, before moving onto more definitions and whatnot, I want to make note of one of the most important classes of rings: polynomial rings.</p>
<blockquote>
<p>Definition<br />
Given a ring<sup id="fnref:4"><a href="#fn:4" class="footnote">4</a></sup> $R$, the <strong>polynomial ring</strong> $R[x]$ is the ring of (formal) polynomials in $x$ with coefficients in $R$.</p>
</blockquote>
<p>The above definition isn’t all that formal, but the idea is that you have things like $3x^2+2x-7\in\Z[x]$, $\pi x^4-e\in\R[x]$, $3x-1/2\in\Q[x]$, etc. One thing to be careful about is that two polynomials are equal iff their coefficients agree; any polynomial $p(x)\in R[x]$ gives rise to a function <script type="math/tex">R\rightarrow R</script> via evaluation, but the mapping $p\mapsto[r\mapsto p(r)]$ <sup id="fnref:6"><a href="#fn:6" class="footnote">5</a></sup> is not necessarily injective! That is, you can have distinct polynomials that determine the same function, such as $p(x)=x^2$ and $q(x)=x$ in $\zmod2[x]$; $p(x)\neq q(x)$ as polynomials even though $\forall n\in\zmod2:p(n)=q(n)$.</p>
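<p>This $\zmod2$ example is easy to verify directly; a small sketch (the <code>evaluate</code> helper is mine, with polynomials stored as coefficient lists, lowest degree first):</p>

```python
# p(x) = x^2 and q(x) = x are distinct polynomials over Z/2Z
# (coefficient lists [0, 0, 1] vs [0, 1]) yet define the same function.
p = [0, 0, 1]   # x^2
q = [0, 1]      # x

def evaluate(coeffs, r, mod):
    """Evaluate a coefficient list at r, reducing mod `mod`."""
    return sum(c * r**k for k, c in enumerate(coeffs)) % mod

assert p != q                                    # distinct as polynomials
assert all(evaluate(p, r, 2) == evaluate(q, r, 2) for r in range(2))
```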
<blockquote>
<p>Aside<br />
If you want to carefully define the polynomial ring, you can define it as the subset of the ring $R^{\N}$ of functions from $\N$ to $R$ consisting of elements that evaluate to 0 on all but finitely many $n\in\N$. You also have to specify what multiplication looks like because it’s not the usual componentwise product.</p>
</blockquote>
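<p>A minimal sketch of that aside: store a polynomial as its (finitely supported) coefficient list, and multiply by convolution rather than componentwise:</p>

```python
def poly_mul(p, q, mod=None):
    """Convolution product: the coefficient of x^n in pq is
    sum over i+j=n of p_i * q_j (NOT the componentwise product)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    if mod is not None:
        out = [c % mod for c in out]
    return out

# (1 + x)(1 - x) = 1 - x^2 in Z[x]
assert poly_mul([1, 1], [1, -1]) == [1, 0, -1]
```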
<h1 id="domains">Domains</h1>
<p>Unfortunately, unlike the previous post on groups, there isn’t some major result like Lagrange’s Theorem or the First Isomorphism Theorem that we’re working towards here; this post is more a goalless overview of some basics in ring theory.</p>
<blockquote>
<p>Proposition<br />
If $D$ is an integral domain, then so is $D[x]$</p>
</blockquote>
<p>Before proving this, we’ll define the degree of a polynomial which is a notion we’ll see more than once.</p>
<blockquote>
<p>Definition<br />
Given some polynomial $p(x)=\sum_{k=0}^na_kx^k\in R[x]$, its <strong>degree</strong> $\deg p(x)$ is the largest $k$ such that $a_k\neq0$. Note that $\deg0$ is undefined whereas $\deg r=0$ for any nonzero $r\in R$.</p>
</blockquote>
<blockquote>
<p>Remark<br />
Given nonzero $p,q\in D[x]$ where $D$ is an integral domain, we have $\deg(pq)=\deg p+\deg q$ which is simple but important.</p>
</blockquote>
<p>The above remark is actually strong enough to imply the proposition, so we omit a formal proof of that. In the remark, we require $D$ to be an integral domain so that the leading coefficient <sup id="fnref:5"><a href="#fn:5" class="footnote">6</a></sup> of $pq$ is guaranteed to be nonzero since it’s the product of two nonzero things (i.e. the leading coefficients of $p$ and $q$). I don’t have a good transition here, but another important thing related to integral domains is…</p>
<blockquote>
<p>Definition<br />
Given a (possibly non-commutative) ring $R$ with unity, there is a unique ring map $\Z\rightarrow R$ (why?). If $D$ is a domain, then we define the <strong>characteristic</strong> $\Char D$ of $D$ to be the least positive $n\in\Z$ mapping to 0 under this unique ring map. If no positive integer maps to 0, then we say $\Char D=0$.</p>
</blockquote>
<p>There are a few ways to think of characteristic. In a while when we define ideals, it’ll become clear that the characteristic of $D$ is the generator of $\ker(\Z\rightarrow D)$; we can also say that $\Char D$ is the (least) number of times you can add 1 to itself in a ring before getting $0$ <sup id="fnref:7"><a href="#fn:7" class="footnote">7</a></sup>; alternatively, remembering that rings are abelian groups, $\Char D$ is the additive order of $1$. Good examples to keep in mind here are $\Char\Z=0$ and $\Char\zmod p=p$. Vaguely put, characteristic is a good indicator for the behavior of a ring; weird things can happen in rings of low characteristics.</p>
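<p>For $\zmod m$, the “add $1$ to itself until you hit $0$” description of characteristic can be computed directly (a sketch; a search like this obviously cannot detect characteristic $0$):</p>

```python
def characteristic_zmod(m):
    """Additive order of 1 in Z/mZ: least n > 0 with n*1 == 0 (mod m)."""
    n, total = 1, 1 % m
    while total != 0:
        n += 1
        total = (total + 1) % m
    return n

assert characteristic_zmod(7) == 7   # Char(Z/pZ) = p
assert characteristic_zmod(8) == 8
```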
<blockquote>
<p>Theorem<br />
Given an integral domain $D$ of nonzero characteristic, $\Char D$ is prime</p>
</blockquote>
<div class="proof2">
Pf: Let $D$ be an integral domain and assume $\Char D=n\neq0$. Now, write $n=ab$ ($a,b$ both positive) and let $f:\Z\rightarrow D$ be the ring map. We have $0=f(n)=f(ab)=f(a)f(b)$ but $D$ has no zero divisors so $f(a)=0$ or $f(b)=0$. Assume WLOG that $f(a)=0$. Since $a\le n$ and $n$ is the minimal integer with $f(n)=0$, we conclude that $a=n$ which means $b=1$. Thus, the only divisors of $n$ are $1,n$ so $n$ is prime. $\square$
</div>
<blockquote>
<p>Corollary<br />
$\zmod n$ is an integral domain implies that $n$ is prime</p>
</blockquote>
<p>The converse of that corollary is true as well, and proving both directions is left as an exercise. Let’s shift gears a little, and instead of talking about properties of rings, we’ll look at some specific types of elements.</p>
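<p>If you want evidence before doing the exercise, both directions can be brute-forced for small $n$ (a sketch, with hypothetical helper names):</p>

```python
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n**0.5) + 1))

def has_zero_divisors(n):
    """Does Z/nZ contain nonzero a, b with ab == 0?"""
    return any((a * b) % n == 0 for a in range(1, n) for b in range(1, n))

# Z/nZ is an integral domain exactly when n is prime (for n >= 2).
for n in range(2, 60):
    assert is_prime(n) == (not has_zero_divisors(n))
```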
<blockquote>
<p>Definitions<br />
Let $R$ be a ring. An element $r\in R$ is called a <strong>unit</strong> if it divides 1. That is, there exists some $s\in R$ such that $rs=1$. We say a non-zero non-unit $r\in R$ is <strong>irreducible</strong> if whenever we write $r=ab$, it must be the case that either $a$ or $b$ is a unit. Finally, a non-zero non-unit $r\in R$ is <strong>prime</strong> if $r\mid ab$ implies that $r\mid a$ or $r\mid b$.</p>
</blockquote>
<p>In the integers $\Z$, the only units are $\pm1$, and prime and irreducible mean the same thing. In general, prime implies irreducible.</p>
<blockquote>
<p>Theorem<br />
Let $D$ be an integral domain. Then, every prime element is irreducible</p>
</blockquote>
<div class="proof2">
Pf: Pick some prime $p\in D$ and write $p=ab$. Then, $p\mid ab$ so $p\mid a$ or $p\mid b$. Assume WLOG that $p\mid a$ and write $a=pc$. Substituting into the first equation, we have $p=pcb$ which means $1=cb$ and so $b$ was a unit. $\square$
</div>
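<p>The converse fails, and the standard counterexample (discussed next) lives in $\Z[\sqrt{-5}]$. Its arithmetic is easy to machine-check by storing $a+b\sqrt{-5}$ as the integer pair $(a,b)$; a sketch:</p>

```python
# Elements of Z[sqrt(-5)] as pairs (a, b) meaning a + b*sqrt(-5);
# the multiplicative norm is N(a + b*sqrt(-5)) = a^2 + 5b^2.
def norm(a, b):
    return a * a + 5 * b * b

# No element has norm 2, so 2 = xy forces N(x) or N(y) to be 1 (a unit):
# 2 is irreducible.  (A small box suffices since a^2 + 5b^2 grows fast.)
assert all(norm(a, b) != 2 for a in range(-2, 3) for b in range(-1, 2))

# Yet 2 divides 6 = (1 + sqrt(-5))(1 - sqrt(-5)) ...
def mul(x, y):  # (a + b*s)(c + d*s) with s^2 = -5
    a, b = x; c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

assert mul((1, 1), (1, -1)) == (6, 0)
# ... while 2 divides neither factor: a multiple of 2 has both
# components even, and 1 + sqrt(-5) has an odd component.
assert any(v % 2 for v in (1, 1))
```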
<p>The converse of this theorem does not hold in general though. For a counterexample, we can consider the ring $\Z[\sqrt{-5}]=\{a+b\sqrt{-5}:a,b\in\Z\}$. Here, $2$ is irreducible, as can easily be proven using the norm map <sup id="fnref:8"><a href="#fn:8" class="footnote">8</a></sup>. However, $2$ divides $6=(1+\sqrt{-5})(1-\sqrt{-5})$ but divides neither of $1\pm\sqrt{-5}$ since, for example, any multiple of $2$ will have both components even. Since the two definitions coincided for integers, we’d like to study other rings where they are the same as well. To that end, we define the following</p>
<blockquote>
<p>Definition<br />
A <strong>unique factorization domain</strong> (or UFD) $U$ is an integral domain where every non-zero $x\in U$ can be written as a product $x=up_1p_2\dots p_n$ of a unit $u$ with irreducibles $p_i$ (possibly with $n=0$). Furthermore, this representation is unique in the sense that given $x=wq_1q_2\dots q_m$ as well, we must have $m=n$ and (after rearrangement) $q_i=v_ip_i$ for units $v_i$.</p>
</blockquote>
<p>That definition is a bit of a mess, but the basic idea is that we have some analogue of the fundamental theorem of arithmetic. A good example of a UFD to keep in mind is $\Z[x]$, and in fact more generally</p>
<blockquote>
<p>Theorem<br />
If $U$ is a UFD, then so is $U[x]$</p>
</blockquote>
<div class="proof2">
Pf: Omitted. At this point, I don't think we've quite developed enough theory to prove this nicely, so look up a proof after finishing this post.
</div>
<p>Because I don’t want to wait too long before saying this, and because it’s related to the previously omitted proof, let’s look at just about the nicest rings known to algebra.</p>
<blockquote>
<p>Definition<br />
A <strong>field</strong> $k$ is a ring such that $(k-\{0\},\cdot)$ is an abelian group. i.e. all non-zero elements of $k$ are units.</p>
</blockquote>
<p>Examples of fields include $\Q, \R, \C,$ and $\qadjs2=\{a+b\sqrt2:a,b\in\Q\}$. Because multiplicative inverses exist, cancellation automatically holds in fields and so all fields are domains. The 4 examples I just gave all have characteristic 0, but fields with prime characteristic exist as well.</p>
<blockquote>
<p>Proposition<br />
$\zmod p$ is a field. More generally, any finite domain is a field.</p>
</blockquote>
<div class="proof2">
Pf: Let $D$ be a finite domain, and fix some non-zero $d\in D$. Consider the map $m_d:D\rightarrow D$ given by $m_d(a)=da$. We claim this map is injective. Pick some $a\in\ker m_d$. Then, $m_d(a)=da=0\implies a=0$ since $d\neq0$ by assumption. Thus, $m_d$ has trivial kernel and hence is injective. Now, any injective map between finite sets is automatically surjective, so $\image m_d=D$. In particular, there exists some $c\in D$ such that $m_d(c)=dc=1$, so $d$ has an inverse. $\square$
</div>
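<p>Once the ring is finite, the proof above is effectively an algorithm: search the image of $m_d$ for $1$. A sketch in $\F_7$:</p>

```python
# In F_7, the map m_d(a) = d*a is injective for d != 0, hence surjective,
# so some c satisfies d*c == 1 (mod 7): every nonzero element is a unit.
p = 7
for d in range(1, p):
    image = {(d * a) % p for a in range(p)}
    assert image == set(range(p))            # m_d is surjective
    c = next(a for a in range(p) if (d * a) % p == 1)
    assert (d * c) % p == 1                  # the inverse we found
```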
<p>When thinking of $\zmod p$ as a field, we usually denote it $\F_p$ and call it the (finite) field with $p$ elements. This is not the only connection between domains and fields. It’s clear that every subring of a field is a domain, but it turns out the converse also holds.</p>
<blockquote>
<p>Definition<br />
Given a domain $D$, its <strong>field of fractions</strong> $\Frac(D)$ is the field whose elements are formal symbols $\frac ab$ ($a,b\in D$, $b\neq0$) modded by the relation $\sim$ with addition and multiplication given by<br /><center>
$$\begin{align*}
\frac ab+\frac cd=\frac{ad+bc}{bd} && \frac ab\frac cd=\frac{ac}{bd} && \frac ab\sim\frac cd\iff ad=bc
\end{align*}$$</center>
Note that we can embed $D$ in its field of fractions via the map $d\mapsto\frac d1$</p>
</blockquote>
<p>It’s up to you to verify that this construction gives an actual field. After that, forgetting fields for a moment, we look again at the definition of a UFD and realize that units can be annoying, so we’ll turn our attention to something a little more unit-agnostic.</p>
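<p>As a starting point for that verification, here is a minimal model of $\Frac(\Z)$ taken straight from the definition (unreduced pairs with the relation $ad=bc$; a sketch only):</p>

```python
# Frac(Z): pairs (a, b) with b != 0, where (a, b) ~ (c, d) iff a*d == b*c,
# and the arithmetic is exactly the one from the definition.
def eq(x, y):
    (a, b), (c, d) = x, y
    return a * d == b * c

def add(x, y):
    (a, b), (c, d) = x, y
    return (a * d + b * c, b * d)

def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c, b * d)

half, third = (1, 2), (1, 3)
assert eq(add(half, third), (5, 6))      # 1/2 + 1/3 ~ 5/6
assert eq(mul(half, third), (1, 6))      # 1/2 * 1/3 ~ 1/6
assert eq((2, 4), half)                  # 2/4 ~ 1/2: unreduced reps agree
```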
<h1 id="ideals">Ideals</h1>
<p>The big payoffs of the group theory post were both related to the idea of quotient groups. In this section, we’ll see how to define the analogous idea of quotient rings.</p>
<blockquote>
<p>Definition<br />
Given a ring $R$, an <strong>ideal</strong> $I\subseteq R$ is an additive subgroup such that $ar\in I$ for all $a\in I$ and $r\in R$.</p>
</blockquote>
<blockquote>
<p>Remark<br />
Depending on the author, ideals may or may not be rings. They are if you don’t require rings to have unity, but aren’t otherwise.</p>
</blockquote>
<p>We could have followed the footsteps of the group theory post and defined ideals as kernels of ring maps, but ideals have more of a life of their own than normal subgroups, so this definition is the one always used.</p>
<blockquote>
<p>Proposition<br />
Given a ring map $f:R\rightarrow S$, its kernel $\ker f\subseteq R$ is an ideal.</p>
</blockquote>
<div class="proof2">
Pf: Exercise for the reader
</div>
<p>Now, recall that in abelian groups, all subgroups are normal. Luckily for us, all rings are additive abelian groups, so ideals are automatically normal subgroups. This means we can form the quotient group $R/I$ as before; however, the additional condition that $I$ “absorbs” $R$ ensures that we can endow this quotient with a ring structure.</p>
<blockquote>
<p>Definition<br />
Given a ring $R$ and ideal $I\subseteq R$, the <strong>quotient ring</strong> $R/I$ is the quotient group endowed with the following multiplication of cosets<br /><center>$$(a+I)(b+I)=ab+I$$</center></p>
</blockquote>
<blockquote>
<p>Exercise<br />
Verify that this definition gives a well-defined ring.</p>
</blockquote>
<blockquote>
<p>Exercise<br />
Prove the first isomorphism theorem for rings: Given a surjective ring map $f:R\rightarrow S$, we have $R/\ker f\simeq S$</p>
</blockquote>
<p>As there are different types of rings, we have different types of ideals depending on how nice the associated quotient ring is. It’s worth convincing yourself that any ideal containing a unit must be all of $R$, and the only ideals of a field are the trivial ones <sup id="fnref:9"><a href="#fn:9" class="footnote">9</a></sup>.</p>
<blockquote>
<p>Definition<br />
Let $R$ be a ring and $I\subseteq R$ an ideal of $R$. We say that $I$ is <strong>prime</strong> if $R/I$ is an integral domain, and $I$ is <strong>maximal</strong> if $R/I$ is a field.</p>
</blockquote>
<blockquote>
<p>Theorem<br />
Let $R$ be a ring with ideal $I$. Then, $I$ is prime iff $I\neq R$ and $ab\in I\implies a\in I$ or $b\in I$, and $I$ is maximal iff $I\neq R$ and given any ideal $J$ with $I\subseteq J\subseteq R$, either $J=I$ or $J=R$.</p>
</blockquote>
<div class="proof2">
Pf: The statement about prime ideals is left as an exercise, but we'll prove the one about maximal ideals here. $(\rightarrow)$ Assume $I$ is maximal, and pick some ideal $J$ with $I\subseteq J\subseteq R$. Let $f:R\rightarrow R/I$ be the quotient map. It's easily verified that $f(J)$ is an ideal so $f(J)=\{0\}$ or $f(J)=R/I$. In the first case, we have $J\subseteq\ker f=I\implies J=I$. In the second, any preimage of $1\in R/I$ is necessarily a unit, so $J=R$ as it contains a unit. ($\leftarrow$) Conversely, assume that $I\subseteq J\subseteq R\implies J=I$ or $J=R$. Let $f:R\rightarrow R/I$ be the quotient map again, and consider an ideal $\tilde J\subseteq R/I$. It's again easily verified that $f^{-1}(\tilde J)$ is an ideal. Since $0\in\tilde J$, we must have $I\subseteq f^{-1}(\tilde J)\subseteq R$ so $\tilde J=\{0\}$ or $R/I$. This implies that $R/I$ is a field. Indeed, given any nonzero $x\in R/I$, the ideal it generates $(x)=R/I$ so there must be some $y\in R/I$ such that $xy=1$. $\square$
</div>
<p>The above proof makes use of the <strong>ideal generated by x</strong> which is given by $(x)=Rx=\{rx:r\in R\}$. We can generalize this notion to any collection of elements</p>
<blockquote>
<p>Definition<br />
Given a (not necessarily finite) subset $S\subseteq R$, the <strong>ideal generated by S</strong> is the ideal<br /><center>$$\left\{\sum_{s\in S}a_s\cdot s:a_s\in R,\text{all but finitely many }a_s\text{ are zero}\right\}$$</center>
When $S=\{x_1,\dots,x_n\}$ is finite, this is commonly denoted<br /><center>$$(x_1,\dots,x_n)=\sum_{i=1}^nRx_i=\{r_1x_1+\dots+r_nx_n:r_i\in R\}$$</center></p>
</blockquote>
<p>With this, we can define that last special type of ideal in this post.</p>
<blockquote>
<p>Definition<br />
We call an ideal $I\subseteq R$ <strong>principal</strong> (or say it’s <strong>principally generated</strong>) if it is generated by a single element.</p>
</blockquote>
<p>Principal ideals are some of the nicest ideals, and behave very similarly to numbers (i.e. elements of $R$). However, they have the added benefit that if you multiply one by a unit, nothing changes. Hence, we arrive at our next kind of ring</p>
<blockquote>
<p>Definition<br />
If $R$ is a domain where every ideal is principal, then we call $R$ a <strong>principal ideal domain</strong>, or <strong>PID</strong>.</p>
</blockquote>
<blockquote>
<p>Theorem<br />
Every PID is a UFD</p>
</blockquote>
<div class="proof2">
Pf: One of my goals this post is to avoid writing any proofs involving UFDs, so omitted.
</div>
<p>Examples of PIDs include $\Z$, and as we’ll see in a moment, $k[x]$ for $k$ a field. One thing that is true in general is that $(p)$ is a prime ideal if $p$ is a prime element. Given the following theorem, this means that for PIDs, every nonzero prime ideal is maximal.</p>
<blockquote>
<p>Theorem<br />
In a PID, an ideal is maximal iff it’s generated by an irreducible</p>
</blockquote>
<div class="proof2">
Pf: $(\leftarrow)$ Let $I=(r)$ where $r\in R$ is irreducible and $R$ is a PID. Consider some ideal $J=(a)$ with $I\subseteq J\subseteq R$. Since $r\in(a)$, there must exist some $b\in R$ with $r=ab$. However, because $r$ is irreducible, either $a$ is a unit or $b$ is. If $a$ is a unit, then $J=R$. If $b$ is a unit, then $J=I$ since unit multiples generate the same ideal. Thus, $I$ is maximal. ($\rightarrow$) Run the same argument in reverse: $I=(r)$ is maximal and $r=ab$ implies $(r)\subseteq(a)\subseteq R$, so fill in the blank. $\square$
</div>
<p>One application of the above theorem is that it lets us generate fields of varying sizes.</p>
<blockquote>
<p>Exercise<br />
Show that if $f(x)\in\F_p[x]$ is an irreducible polynomial, then $\F_p[x]/(f(x))$ is a finite field of size $p^{\deg f}$</p>
</blockquote>
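<p>A sketch of the smallest interesting case of this exercise: $p=2$ and $f(x)=x^2+x+1$, which is irreducible over $\F_2$ (it has no roots). Elements of the quotient are represented as pairs $(a_0,a_1)$ meaning $a_0+a_1x$, and products reduce via $x^2=x+1$:</p>

```python
# F_2[x]/(x^2 + x + 1): elements a0 + a1*x with a0, a1 in {0, 1}.
# Since x^2 + x + 1 = 0 in the quotient, we reduce with x^2 = x + 1.
def mul(u, v):
    a0, a1 = u; b0, b1 = v
    c0 = a0 * b0            # constant term
    c1 = a0 * b1 + a1 * b0  # x term
    c2 = a1 * b1            # x^2 term, folded back in via x^2 = x + 1
    return ((c0 + c2) % 2, (c1 + c2) % 2)

elements = [(a0, a1) for a0 in (0, 1) for a1 in (0, 1)]
assert len(elements) == 4                      # p^(deg f) = 2^2 elements
# Every nonzero element has an inverse, so the quotient is a field.
for u in elements[1:]:
    assert any(mul(u, v) == (1, 0) for v in elements)
```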
<p>Showing something is a PID directly can be difficult, so it’s sometimes helpful to instead show the stronger condition that your ring has a Euclidean algorithm on it.</p>
<blockquote>
<p>Definition<br />
A <strong>Euclidean domain</strong> $E$ is an integral domain with a function $f:E-\{0\}\rightarrow\Z_{\ge0}$ such that for any $a,b\in E$ with $b\neq0$, there exist $q,r\in E$ where $a=bq+r$ and $r=0$ or $f(r)<f(b)$.</p>
</blockquote>
<p>In essence, you can perform division in $E$, and there’s a sense in which the remainder is smaller than what you started with. Examples include $\Z$ where $f(n)=|n|$ and any field $k$ with $f(x)=1$. A more interesting example is the Gaussian integers $\Z[i]$ with $f(a+bi)=a^2+b^2$. If you’ve been paying attention, you’ll notice that there was no $R\text{ PID}\implies R[x]\text{ PID}$ theorem; this is because that statement is false (for a counterexample, consider $R=\Z$: the ideal $(2,x)\subset\Z[x]$ is not principal). However, with stronger assumptions, you can get something almost like this.</p>
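<p>For $\Z[i]$, division with remainder comes from rounding the exact quotient to the nearest Gaussian integer; a sketch using Python’s built-in complex numbers:</p>

```python
# Division with remainder in Z[i] using f(a + bi) = a^2 + b^2:
# round a/b to the nearest Gaussian integer q; the remainder r = a - bq
# then satisfies f(r) <= f(b)/2 < f(b), since |a/b - q| <= sqrt(2)/2.
def gauss_divmod(a, b):
    t = a / b
    q = complex(round(t.real), round(t.imag))
    return q, a - b * q

def f(z):
    return int(z.real) ** 2 + int(z.imag) ** 2

a, b = complex(7, 3), complex(2, 1)
q, r = gauss_divmod(a, b)
assert a == b * q + r
assert r == 0 or f(r) < f(b)
```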
<blockquote>
<p>Theorem<br />
If $k$ is a field, then $k[x]$ is a Euclidean domain</p>
</blockquote>
<div class="proof2">
Pf sketch: For your function $f:k[x]-\{0\}\rightarrow\Z_{\ge0}$, you just use $f(p)=\deg p$. With this choice, polynomial long division gets you what you need. Since we're working over a field, you can always scale the leading coefficient of the divisor to cancel out all higher order terms of the dividend so that the remainder has strictly smaller degree. $\square$
</div>
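<p>The long division from the sketch can itself be sketched in code, here over $\F_5$ with coefficient lists (lowest degree first; helper names are my own). The key step is scaling by the inverse of the divisor’s leading coefficient, which exists because we’re over a field:</p>

```python
# Long division in F_5[x]; polynomials are coefficient lists, low degree first.
P = 5

def inv(a):
    """Inverse mod 5 by brute force (fine since P is a small prime)."""
    return next(x for x in range(1, P) if (a * x) % P == 1)

def polydivmod(a, b):
    r = list(a)
    q = [0] * max(len(a) - len(b) + 1, 1)
    lead = inv(b[-1])
    while len(r) >= len(b) and any(r):
        shift = len(r) - len(b)
        c = (r[-1] * lead) % P          # kills the top coefficient of r
        q[shift] = c
        for i, bc in enumerate(b):
            r[shift + i] = (r[shift + i] - c * bc) % P
        while r and r[-1] == 0:         # drop trailing zeros: deg strictly fell
            r.pop()
    return q, r

# x^2 + 1 = (x + 2)(x + 3) over F_5, since (x+2)(x+3) = x^2 + 5x + 6.
q, r = polydivmod([1, 0, 1], [2, 1])
assert (q, r) == ([3, 1], [])
```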
<blockquote>
<p>Theorem<br />
Every ED is a PID</p>
</blockquote>
<div class="proof2">
Pf: Let $E$ be a Euclidean domain with ideal $I$. Pick non-zero $x\in I$ so that $f(x)$ is minimal among elements of $I$. Now, consider any $a\in I$ and divide to get $a=xq+r$ where $r=0$ or $f(r)< f(x)$. We claim that $a\in(x)$. Note that $r=a-xq\in I$ so $f(r)\ge f(x)$ (if $r\neq0$) by minimality of $x$. This means that $r=0$ so $a=xq\in(x)$ as desired and so $I=(x)$ is principal. $\square$
</div>
<p>Hence, the polynomial ring over any field is a PID.</p>
<h1 id="a-glimpse-of-field-theory">A Glimpse of Field Theory</h1>
<p>Hopefully there’s nothing major I forgot to say <sup id="fnref:10"><a href="#fn:10" class="footnote">10</a></sup>. With this last bit, I want to mention one neat result about fields. For this, I’m going to need to assume you know a little linear algebra: specifically, the definition of a vector space over a field, and the fact that every vector space has a basis. We’ll use this to show that the sizes of fields are pretty constrained.</p>
<blockquote>
<p>Definition<br />
Let $F,E$ be fields and assume that $F\subseteq E$. We call $E$ an <strong>extension field</strong> of $F$ and denote this $E/F$</p>
</blockquote>
<p>One of the most important things about extension fields is that if $E/F$ is a field extension, then $E$ is an $F$-vector space! Although it’s not difficult to see, you should verify this claim. It basically boils down to the fact that multiplication is linear.</p>
<blockquote>
<p>Definition<br />
The <strong>degree</strong> of a field extension $E/F$ is $[E:F]=\dim_FE$, the dimension of $E$ as an $F$-vector space</p>
</blockquote>
<p>With that, our last result</p>
<blockquote>
<p>Theorem<br />
Let $E$ be a finite field. Then, $|E|=p^n$ for some prime $p$ and integer $n$</p>
</blockquote>
<div class="proof2">
Pf: Let $p=\Char E$ which must be prime since it's nonzero. Let $F$ be $E$'s so-called $\textit{prime subfield}$, which is the image of the map $\Z\rightarrow E$. Finally, let $n=[E:F]$ and let $e_1,\dots,e_n\in E$ be an $F$-basis for $E$. Then, every element of $E$ can be written uniquely in the form $$a_1e_1+\dots+a_ne_n$$ where $a_i\in F$. Since $|F|=\Char E=p$, and there are $n$ coefficients to choose, there are $p^n$ expressions of this form and correspondingly $|E|=p^n$. $\square$
</div>
<div class="footnotes">
<ol>
<li id="fn:1">
<p>Also, such objects don’t show up in practice that often (read: ever) <a href="#fnref:1" class="reversefootnote">↩</a></p>
</li>
<li id="fn:2">
<p>Matrices are the standard example of non-commutative rings <a href="#fnref:2" class="reversefootnote">↩</a></p>
</li>
<li id="fn:3">
<p>(multiplicative) inverses are unique when they exist; in a ring with unity, $(-1)a=-a$ (i.e. $-1$ times $a$ is the additive inverse of $a$); etc. <a href="#fnref:3" class="reversefootnote">↩</a></p>
</li>
<li id="fn:4">
<p>While typing this, I realized I don’t know if people ever work in polynomials over non-commutative or non-unital rings <a href="#fnref:4" class="reversefootnote">↩</a></p>
</li>
<li id="fn:6">
<p>This is shorthand for a map that, given a polynomial p, returns a function that takes some element r and returns p(r), the result of evaluating p at r <a href="#fnref:6" class="reversefootnote">↩</a></p>
</li>
<li id="fn:5">
<p>coefficient of $x^d$ where $d=\deg p$ <a href="#fnref:5" class="reversefootnote">↩</a></p>
</li>
<li id="fn:7">
<p>This is how I’ve always seen characteristic defined, but it leads to confusing notation like n1:=1+1+…+1 (n times), so you write things like n1=(ab)1=(a1)(b1) and it gets annoying to keep track of which 1’s matter and which you can drop because of identity. I much prefer making explicit mention of a ring map <a href="#fnref:7" class="reversefootnote">↩</a></p>
</li>
<li id="fn:8">
<p>previously defined <a href="../solving-pell">here</a> <a href="#fnref:8" class="reversefootnote">↩</a></p>
</li>
<li id="fn:9">
<p>the zero ideal and the field itself <a href="#fnref:9" class="reversefootnote">↩</a></p>
</li>
<li id="fn:10">
<p>Worst-case scenario, I just edit this post later on to add in anything missing <a href="#fnref:10" class="reversefootnote">↩</a></p>
</li>
</ol>
</div>