"Fourier Analysis"

I may start writing posts much more frequently than usual. There is a lot of mathematics that I want to learn this quarter, and somehow it seems that writing these posts is one of the best ways I know to help me absorb everything 1. In this post, I want to give a quick introduction to Fourier analysis on the circle $S^1$ and on the real line $\R$. Because of my (lack of a) background in analysis, I may be a little handwavy every now and then 2, but the main ideas will be there. In addition, as I’m stealing this material from Stein as I type this all up, I might try to include some footnotes with questions/comments I have about how the ideas here generalize or relate to other parts of mathematics. Finally, this post will lead into another one on the Riemann zeta function 3, so look forward to that.

Fourier Analysis on $S^1$

Our goal is to be able to say that we can represent a function $f:S^1\to\C$ (which we view as a $1$-periodic function $f:\R\to\C$ 4) as a Fourier series

\[f(x)=\sum_{n=-\infty}^\infty a_ne^{2\pi inx}\]

with coefficients given by

\[a_n=\int_0^1f(x)e^{-2\pi inx}\dx.\]

Now, there are some really ugly functions $f:S^1\to\C$, so we obviously can’t expect this to hold for all of them. Hence, we will impose the restriction that all our functions are Riemann integrable when considered as functions $[0,1]\to\C$ (i.e. their real and imaginary parts are both Riemann integrable in the following sense).

We say a function $f:[0,1]\to\R$ is Riemann integrable if it is bounded and for every $\eps>0$, there exists a subdivision $P=\bracks{0=x_0< x_1<\cdots< x_n=1}$ of $[0,1]$ so that $\mc U(f, P)-\mc L(f, P)<\eps$ where $$\mc U(f, P)=\sum_{i=1}^n\sqbracks{\sup_{x\in[x_{i-1},x_i]}f(x)}\parens{x_i-x_{i-1}}\text{ and }\mc L(f, P)=\sum_{i=1}^n\sqbracks{\inf_{x\in[x_{i-1},x_i]}f(x)}\parens{x_i-x_{i-1}}$$ are the upper and lower sums, respectively, of $f$ for this subdivision. Whenever $f$ is Riemann integrable, the Riemann integral of $f$ is $$\int_0^1f(x)\dx:=\inf_P\mc U(f,P)=\sup_P\mc L(f,P)$$
All continuous functions $[0,1]\to\C$ are Riemann integrable, but a Riemann integrable function does not have to be continuous; however, if $f$ is Riemann integrable, then its set of discontinuities has measure $0$.
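
To see the definition in action, here's a quick numerical sketch (my own, not from Stein) estimating the upper and lower sums of $f(x)=x^2$ on uniform subdivisions of $[0,1]$; the gap $\mc U(f,P)-\mc L(f,P)$ shrinks as the subdivision is refined, and both sums approach $\int_0^1x^2\dx=\frac13$. The dense sampling used to approximate each $\sup$ and $\inf$ is just a convenience of the sketch.

```python
import numpy as np

def upper_lower_sums(f, n):
    # upper and lower sums of f over [0,1] with n equal subintervals; the sup and
    # inf on each subinterval are approximated by sampling it densely
    xs = np.linspace(0.0, 1.0, n + 1)
    upper, lower = 0.0, 0.0
    for a, b in zip(xs[:-1], xs[1:]):
        vals = f(np.linspace(a, b, 50))
        upper += vals.max() * (b - a)
        lower += vals.min() * (b - a)
    return upper, lower

f = lambda x: x ** 2
for n in (4, 16, 64, 256):
    U, L = upper_lower_sums(f, n)
    print(n, U, L, U - L)    # U - L shrinks toward 0, and both U and L approach 1/3
```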

Denote the $\C$-algebra 5 of Riemann integrable (complex-valued) functions on $S^1$ by $\mc R(S^1)$. Given some $f\in\mc R(S^1)$, its $n$th Fourier coefficient is 6

\[\hat f(n)=a(f)_ n:=\int_0^1f(x)e^{-2\pi inx}\dx=\int_0^1f(x)\cos(2\pi nx)\dx-i\int_0^1f(x)\sin(2\pi nx)\dx.\]
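
To make this formula concrete, here's a small Python sketch (not from the book): approximate $\hat f(n)$ with a midpoint rule for the circle function $f(x)=x(1-x)$, then check that a truncated Fourier series nearly reproduces $f$ at a test point. The helper name `fourier_coeff` and the particular choice of $f$ are mine.

```python
import numpy as np

def fourier_coeff(f, n, num=4000):
    # midpoint-rule approximation of \hat f(n) = \int_0^1 f(x) e^{-2 pi i n x} dx
    x = (np.arange(num) + 0.5) / num
    return np.mean(f(x) * np.exp(-2j * np.pi * n * x))

f = lambda x: x * (1 - x)        # continuous with f(0) = f(1), so a function on S^1

# truncated Fourier series of f, evaluated at a test point
x0 = 0.3
partial = sum(fourier_coeff(f, n) * np.exp(2j * np.pi * n * x0) for n in range(-20, 21))
print(partial.real, f(x0))       # the two numbers should be close
```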

If we feel so inclined, we can view this construction as a function 7

\[\mapdesc{\wh{}}{\mc R(S^1)}{\C^{\Z}}{f}{\hat f},\]

where $\C^{\Z}$ is the space of all functions $\Z\to\C$. It’s worth noting that while we can define Fourier coefficients for all Riemann integrable functions, the associated Fourier series does not always converge to $f$. This is immediate once you remember that you can change a function at a single point 8 without changing its integral, so to obtain a nice theory, we’ll need to be more restrictive. To help us decide how restrictive, here’s a nice theorem.

Suppose that $f:S^1\to\C$ (i.e. $f:[0,1]\to\C$ where $f(0)=f(1)$) is Riemann integrable with $a(f)_n=0$ for all $n\in\Z$. Then, $f(x_0)=0$ if $f$ is continuous at $x_0$.
We prove this in the case that $f$ is real-valued. By shifting $f$ and negating if necessary, we may assume that $x_0=\frac12$ and $f(x_0)>0$. Since $f$ is continuous at $x_0$, we can choose $0<\delta\le\frac12$ so that $\abs{f(x)-f(x_0)}< f(x_0)/2$ (and hence $f(x)>f(x_0)/2$) whenever $\abs{x-\frac12}<\delta$. Let
$$p(x)=\eps+\cos\parens{2\pi x-\pi}$$ where $\eps>0$ is small enough that $\abs{p(x)}< 1-\eps/2$ whenever $\delta\le\abs{x-\frac12}\le\frac12$ (e.g. $\eps<\frac23\parens{1-\cos(2\pi\delta)}$). Next, fix a positive $\eta<\delta$ s.t. $p(x)\ge1+\eps/2$ for $\abs{x-\frac12}<\eta$ (this exists by continuity since $p(1/2)=1+\eps>1+\eps/2$), and let $p_k(x)=p(x)^k$. Finally, fix $B$ so that $\abs{f(x)}\le B$ for all $x$. Each $p_k$ is a trigonometric polynomial, so $\hat f(n)=0$ for all $n$ implies that
$$\int_0^1f(x)p_k(x)\dx=0\text{ for all $k$}.$$ At the same time, our various chosen parameters give us the following integral estimates $$\begin{align*} \abs{\int_{\delta\le\abs{x-\frac12}}f(x)p_k(x)\dx} &\le B(1-\eps/2)^k\\ \int_{\eta\le\abs{x-\frac12}<\delta}f(x)p_k(x)\dx &\ge0\\ \int_{\abs{x-\frac12}<\eta}f(x)p_k(x)\dx &\ge \eta f(x_0)\parens{1+\eps/2}^k \end{align*}.$$ As $k\to\infty$, the top integral approaches $0$, the middle one remains non-negative, and the bottom one approaches $\infty$. Summing them, we get $$\int_0^1f(x)p_k(x)\dx\to\infty\text{ as }k\to\infty,$$ which is a contradiction. When $f$ is not real-valued, let $u(x)=\Re f(x)$ and $v(x)=\Im f(x)$. Then, $$u(x)=\frac{f(x)+\conj f(x)}2\text{ and }v(x)=\frac{f(x)-\conj f(x)}{2i}.$$ Furthermore, $a(\conj f)_n=\conj{a(f)_{-n}}$. Taken together, this means $a(u)_n=\frac12\parens{a(f)_n+\conj{a(f)_{-n}}}=0$ (and similarly, $a(v)_n=0$), so the real-valued case applies to $u$ and $v$, and hence $f$ vanishes at its points of continuity.
If $f$ is continuous on the circle and $\hat f(n)=0$ for all $n\in\Z$, then $f=0$.
Suppose that $f$ is a continuous function on the circle whose Fourier series $$\sum_{n=-\infty}^\infty\hat f(n)e^{2\pi inx}$$ converges absolutely, i.e. $\sum_{n\in\Z}\abs{\hat f(n)}<\infty$. Then, the Fourier series converges uniformly to $f$, i.e. $$\lim_{N\to\infty}\sum_{n=-N}^N\hat f(n)e^{2\pi inx}=f(x)\text{ uniformly in }x$$

The proof idea here is that the condition on the coefficients guarantees that the Fourier series converges (absolutely and uniformly) to some continuous function $g(x)$ with the same coefficients as $f$; hence, $f(x)=g(x)$ by the first corollary.

Given two functions $f,g$, we say $f(x)=O(g(x))$, read "$f(x)$ is big-O of $g(x)$," as $x\to a$ if there exists some constant $C$ such that $\abs{f(x)}\le C\abs{g(x)}$ in some neighborhood of $a$. In particular, if $f(x)=O(g(x))$ as $x\to\infty$, then there are constants $C,n$ such that $\abs{f(x)}\le C\abs{g(x)}$ for all $x\ge n$.
If $f\in C^k(S^1)$ (i.e. $f$ is $k$-times-differentiable with continuous $k$th derivative), then $\hat f(n)=O(1/\abs n^k)$ as $\abs n\to\infty$.
Just use integration by parts. For $n\neq0$, we have $$\begin{align*} \hat f(n)=\int_0^1f(x)e^{-2\pi inx}\dx &=\sqbracks{\frac{-f(x)e^{-2\pi inx}}{2\pi i n}}_0^1+\frac1{2\pi i n}\int_0^1f'(x)e^{-2\pi inx}\dx\\ &=\frac1{2\pi i n}\int_0^1f'(x)e^{-2\pi inx}\dx\\ &=\cdots\\ &=\frac1{(2\pi in)^k}\int_0^1f^{(k)}(x)e^{-2\pi inx}\dx \end{align*}$$ where the bracketed quantities vanish since $f^{(j)}(0)=f^{(j)}(1)$ for all $j$. Fixing $B\in\R_{>0}$ such that $\abs{f^{(k)}(x)}\le B$ for all $x$, this means that $$\abs{\hat f(n)}\le\frac B{(2\pi\abs n)^k}=O(\abs n^{-k})$$
If $f\in C^k(S^1)$ for $k\ge2$, then the Fourier series of $f$ converges absolutely and uniformly to $f$.
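
Here's a quick numerical illustration of this decay (a sketch of my own, not part of the development): the coefficients of $f(x)=x(1-x)$, whose periodic extension is continuous but not $C^1$, decay like $1/n^2$, while those of the smooth circle function $e^{\cos(2\pi x)}$ fall off faster than any power of $n$. The helper `coeff` is the same midpoint-rule approximation as in the earlier sketch.

```python
import numpy as np

def coeff(f, n, num=4000):
    # midpoint-rule approximation of \hat f(n) = \int_0^1 f(x) e^{-2 pi i n x} dx
    x = (np.arange(num) + 0.5) / num
    return np.mean(f(x) * np.exp(-2j * np.pi * n * x))

rough  = lambda x: x * (1 - x)                      # continuous on S^1, but not C^1 there
smooth = lambda x: np.exp(np.cos(2 * np.pi * x))    # genuinely smooth on the circle

for n in (4, 16, 64):
    print(n, abs(coeff(rough, n)), abs(coeff(smooth, n)))
# the first column decays like 1/n^2, while the second falls off faster than any
# power of n (until it hits quadrature / floating-point error)
```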

There’s more that can be said here, but my main goal is to get to Poisson summation, and I think we’ve developed all the theory on $S^1$ that we need for that, so let’s move on$\dots$ after a few remarks.

The first thing we’ll do is update our description of $\wh{}$ as a map on function spaces. Letting $\mc S(\Z)$ denote the space of functions $f:\Z\to\C$ such that $\sum_{n\in\Z}\abs{f(n)}<\infty$, we can view our work here as showing that the function

\[\mapdesc{\wh{}}{C^2(S^1)}{\mc S(\Z)}{f}{\wh f}\]

is injective.

The second thing we’ll do is give a little intuition for the formula for Fourier coefficients, i.e. why take

\[a_n=\int_0^1f(x)e^{-2\pi inx}\dx?\]

The idea is that the functions $g_n(x)=e^{2\pi inx}$ as $n$ varies over $\Z$ are pairwise orthogonal (and have norm $1$) with respect to the following inner product on $\mc R(S^1)$:

\[\angled fg=\int_0^1f(x)\conj{g(x)}\dx.\]

This means that if we can represent some function $f\in\mc R(S^1)$ as $f(x)=\sum_{d\in\Z}c_de^{2\pi idx}$, then we must have

\[\int_0^1f(x)e^{-2\pi inx}\dx=\angled f{g_n}=\angled{\sum_{d\in\Z}c_dg_d}{g_n}=\sum_{d\in\Z}c_d\angled{g_d}{g_n}=c_n\angled{g_n}{g_n}=c_n.\]
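
As a quick sanity check of this orthonormality claim, here's a short sketch (mine) approximating the inner product above with a midpoint rule; the helper `inner` and the test frequencies are arbitrary.

```python
import numpy as np

def inner(f, g, num=4000):
    # <f, g> = \int_0^1 f(x) conj(g(x)) dx, approximated by a midpoint rule
    x = (np.arange(num) + 0.5) / num
    return np.mean(f(x) * np.conj(g(x)))

def g(n):
    return lambda x: np.exp(2j * np.pi * n * x)

print(abs(inner(g(3), g(3))))    # ~1: each g_n has norm 1
print(abs(inner(g(3), g(5))))    # ~0: distinct g_n are orthogonal
print(abs(inner(g(-2), g(7))))   # ~0
```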

Fourier Analysis on $\R$

This time around, we’ll start off with a nice space of functions.

The Schwartz space on $\R$ is the set of all smooth (i.e. infinitely differentiable) functions $f:\R\to\C$ that are rapidly decreasing in the sense that $$\sup_{x\in\R}\abs x^k\abs{f^{(\l)}(x)}<\infty\text{ for every }k,\l\ge0.$$ We denote this space by $\mc S(\R)$. Note that it is a $\C$-vector space.
It's clear from the definition that $f(x)\in\mc S(\R)\implies f'(x)\in\mc S(\R)$ and $xf(x)\in\mc S(\R)$. Hence, $\mc S(\R)$ is closed under differentiation and polynomial multiplication. Put another way, $\mc S(\R)$ is a $\C[x]$-module that is closed under differentiation; put yet another way, $\mc S(\R)$ is a $\C[x,D]$-module where $D$ acts via the differentiation operator.
$f(x)=e^{-x^2}\in\mc S(\R)$. This is because $P(x)e^{-x^2}\to0$ as $\abs x\to\infty$ for any polynomial $P$ (in particular, for $P(x)=x^k$), and an easy induction argument shows that every derivative of $f$ is of the form $P(x)e^{-x^2}$. In fact, $f_a(x)=e^{-ax^2}\in\mc S(\R)$ for every $a>0$.
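
If you want to see some numbers attached to this example, here's a rough sketch (mine) estimating a few of the seminorms $\sup_x\abs x^k\abs{f^{(\l)}(x)}$ for $f(x)=e^{-x^2}$ on a large grid; the first few derivatives are hard-coded using the fact that each one is a polynomial times $e^{-x^2}$.

```python
import numpy as np

x = np.linspace(-20.0, 20.0, 400001)
gauss = np.exp(-x ** 2)
derivs = {0: gauss,                       # f(x)
          1: -2 * x * gauss,              # f'(x)
          2: (4 * x ** 2 - 2) * gauss}    # f''(x)

for l, fl in derivs.items():
    for k in (0, 1, 5, 10):
        print(k, l, np.max(np.abs(x) ** k * np.abs(fl)))
# every printed seminorm is finite, and the maximum is attained well inside the grid,
# since exp(-x^2) eventually crushes any fixed power of x
```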

One (though certainly not the only) nice property of Schwartz functions is that they decay fast enough to have a finite integral over all of $\R$. That is, if $f\in\mc S(\R)$, then

\[\lim_{N\to\infty}\int_{-N}^Nf(x)\dx\]

exists and is finite. To see this, let $I_N=\int_{-N}^Nf(x)\dx$, so we only need to show that the sequence $\{I_N\}$ is Cauchy.

If $f\in\mc S(\R)$, then there exists some $N>0$ s.t. $x\ge N\implies\abs{f(x)}\le1/x^2$.
Suppose not, so there exist arbitrarily large $x\in\R$ with $\abs{f(x)}>1/x^2$. This means we can find some sequence $\{a_n\}$ of real numbers such that $\lim\abs{a_n}=\infty$ and $\abs{a_n}^3\abs{f(a_n)}>\abs{a_n}$ for all $n$. However, this contradicts $$\sup_{x\in\R}\abs x^3\abs{f(x)}<\infty,$$ so we win.

Given that lemma, fix $N$ large enough that $x\ge N\implies\abs{f(x)}\le1/x^2$, and note that for $m>n\ge N$ we have

\[\abs{I_m-I_n}=\abs{\int_{n\le\abs x\le m}f(x)\dx}\le{\int_{n\le\abs x\le m}x^{-2}\dx}\le\sqbracks{-\frac1x}_{-m}^{-n}+\sqbracks{-\frac1x}_n^m=2\parens{\frac1n-\frac1m}\le\frac2N\to0\]

so $\{I_N\}$ is indeed Cauchy, and we can safely define

\[\int_{\R}f(x)\dx=\int_{-\infty}^\infty f(x)\dx=\lim_{N\to\infty}\int_{-N}^Nf(x)\dx.\]
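
Just as a sanity check (and nothing more), here's a quick sketch computing the truncated integrals $I_N$ for the Schwartz function $e^{-x^2}$; the values stabilize almost immediately, as the Cauchy argument above predicts.

```python
import numpy as np

def trunc_integral(N, num=400001):
    # I_N = \int_{-N}^{N} exp(-x^2) dx, approximated by a plain Riemann sum
    x = np.linspace(-N, N, num)
    return np.sum(np.exp(-x ** 2)) * (x[1] - x[0])

for N in (1, 2, 4, 8, 16):
    print(N, trunc_integral(N))
# the values stabilize almost immediately (the limit happens to be sqrt(pi) ~ 1.7724539)
```
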
The Fourier transform of a function $f\in\mc S(\R)$ is defined by $$\hat f(\xi)=\int_{-\infty}^\infty f(x)e^{-2\pi ix\xi}\dx.$$ We will sometimes denote this by $\mc F(f)(\xi)=\hat f(\xi)$.
The Fourier transform enjoys the following list of properties.
  1. $\mc F(f(x+h))(\xi)=\hat f(\xi)e^{2\pi ih\xi}$ when $h\in\R$.
  2. $\mc F(f(x)e^{-2\pi ixh})(\xi)=\hat f(\xi+h)$ when $h\in\R$.
  3. $\mc F(f(\delta x))(\xi)=\inv\delta\hat f(\inv\delta\xi)$ when $\delta>0$.
  4. $\mc F(f')(\xi) = 2\pi i\xi\hat f(\xi)$.
  5. $\mc F(-2\pi ixf(x))(\xi) = \frac{\d}{\d\xi}\hat f(\xi)$.
So the Fourier transform (roughly) interchanges differentiation with multiplication by $\xi$ (up to factors of $2\pi i$), and turns translation by $h$ into multiplication by $e^{2\pi ih\xi}$.
Exercise.
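
Since the proof is left as an exercise, here's a numerical spot check of property 4 (a sketch of mine, not a substitute for the proof) using the Gaussian $f(x)=e^{-\pi x^2}$; its transform $\hat f(\xi)=e^{-\pi\xi^2}$ is the $s=1$ case of the formula quoted in the Poisson summation section below. The helper `ft`, the truncation to $[-12,12]$, and the grid size are all artifacts of the sketch.

```python
import numpy as np

def ft(g, xi, lim=12.0, num=24001):
    # \hat g(xi) = \int g(x) e^{-2 pi i x xi} dx, truncated to [-lim, lim] and
    # approximated by a plain Riemann sum (fine here since everything decays fast)
    x = np.linspace(-lim, lim, num)
    return np.sum(g(x) * np.exp(-2j * np.pi * x * xi)) * (x[1] - x[0])

f  = lambda x: np.exp(-np.pi * x ** 2)                    # Gaussian, with \hat f(xi) = e^{-pi xi^2}
df = lambda x: -2 * np.pi * x * np.exp(-np.pi * x ** 2)   # its derivative f'(x)

xi = 0.7
print(ft(f, xi), np.exp(-np.pi * xi ** 2))         # the Gaussian is (approximately) its own transform
print(ft(df, xi), 2j * np.pi * xi * ft(f, xi))     # property 4: F(f')(xi) = 2*pi*i*xi * F(f)(xi)
```
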
If $f\in\mc S(\R)$, then $\hat f\in\mc S(\R)$.
Note that $\abs{\hat f(\xi)}\le\int_{\R}\abs{f(x)}\dx<\infty$, so $f\in\mc S(\R)\implies\hat f$ is bounded. Now, for any $\l,k\in\Z_{\ge0}$, we have that $$\xi^k\parens{\frac{\d}{\d\xi}}^\l\hat f(\xi)$$ is bounded since it is the Fourier transform of $$\frac1{(2\pi i)^k}\parens{\frac{\d}{\dx}}^k\sqbracks{(-2\pi ix)^\l f(x)}.$$

This post is more about ideas than details, so let’s just state the good stuff.

If $f\in\mc S(\R)$, then $$f(x)=\int_{-\infty}^\infty\hat f(\xi)e^{2\pi ix\xi}\d\xi.$$
Omitted. See the book by Stein and Shakarchi.

Note that, like last time, we can view our work here as showing some function is “nice.” In this instance, we have that

\[\mapdesc{\mc F}{\mc S(\R)}{\mc S(\R)}{f}{\mc F(f)}\]

is a $\C$-vector space isomorphism.
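
To see the inversion formula in action numerically, here's a small round-trip sketch (my own): sample an arbitrary Schwartz function on a grid, approximate $\hat f$ by a Riemann sum, and then integrate $\hat f(\xi)e^{2\pi ix\xi}$ back at a test point. The grid, the truncation to $[-8,8]$, and the particular test function are arbitrary choices.

```python
import numpy as np

X = np.linspace(-8.0, 8.0, 4001)     # shared grid for both integrals; f is negligible outside
DX = X[1] - X[0]

# an arbitrary (real-valued) Schwartz function, sampled on the grid
f_vals = (1 + X) * np.exp(-np.pi * X ** 2)

# forward transform \hat f(xi) at every grid point xi, by a simple Riemann sum
fhat = np.array([np.sum(f_vals * np.exp(-2j * np.pi * X * xi)) * DX for xi in X])

# inverse transform \int \hat f(xi) e^{2 pi i x xi} d xi, evaluated at a single point
x0 = 0.4
recovered = np.sum(fhat * np.exp(2j * np.pi * X * x0)) * DX

print(recovered)                              # ~ f(x0), with a negligible imaginary part
print((1 + x0) * np.exp(-np.pi * x0 ** 2))    # f(x0) computed directly
```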

Poisson Summation

To end things, we’ll relate Fourier transforms and Fourier series in a neat way. Fix some function $f\in\mc S(\R)$ on the real line, and imagine you want to convert this into some function on the circle. One thing you could try is defining

\[F_1(x)=\sum_{n=-\infty}^\infty f(x+n),\]

which is obviously $1$-periodic (the series converges since $f$ decays rapidly). Alternatively, inspired by Fourier Inversion

\[f(x)=\int_{\R}\hat f(\xi)e^{2\pi ix\xi}\d\xi,\]

you could try creating a periodic version of $f$ by considering some discrete analogue of Fourier inversion:

\[F_2(x)=\sum_{n=-\infty}^{\infty}\wh f(n)e^{2\pi inx}.\]

As it turns out, these two approaches are equivalent.

If $f\in\mc S(\R)$, then $$\sum_{n=-\infty}^\infty f(x+n)=\sum_{n=-\infty}^\infty \hat f(n)e^{2\pi inx}.$$ In particular, setting $x=0$ gives $$\sum_{n=-\infty}^\infty f(n)=\sum_{n=-\infty}^\infty\hat f(n).$$
Since both sides are continuous, it suffices to show they have the same Fourier coefficients. Unsurprisingly, the $m$th Fourier coefficient of the RHS is $\hat f(m)$. On the LHS, we have $$\begin{align*} \int_0^1\parens{\sum_{n=-\infty}^\infty f(x+n)}e^{-2\pi imx}\dx &=\sum_{n=-\infty}^\infty\int_0^1f(x+n)e^{-2\pi imx}\dx\\ &=\sum_{n=-\infty}^\infty\int_n^{n+1}f(x)e^{-2\pi imx}\dx\\ &=\int_{-\infty}^\infty f(x)e^{-2\pi imx}\dx\\ &=\hat f(m) \end{align*}$$ where we were allowed to interchange the sum and the integral because $f$ is rapidly decreasing.
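
Here's a quick numerical check of the theorem (not part of the proof): take $f(x)=e^{-\pi sx^2}$, whose Fourier transform is $\hat f(\xi)=s^{-1/2}e^{-\pi\xi^2/s}$ (the formula used in the application below), and compare the two sides at some $x$. Truncating both sums to $\abs n\le50$ is harmless since the terms decay extremely quickly.

```python
import numpy as np

s, x = 0.7, 0.3                          # arbitrary choices
ns = np.arange(-50, 51)                  # both sums decay so fast that |n| <= 50 is plenty

# left-hand side: sum_n f(x + n) for f(x) = exp(-pi s x^2)
lhs = np.sum(np.exp(-np.pi * s * (x + ns) ** 2))

# right-hand side: sum_n fhat(n) e^{2 pi i n x}, with fhat(xi) = s^{-1/2} exp(-pi xi^2 / s)
rhs = np.sum(s ** -0.5 * np.exp(-np.pi * ns ** 2 / s) * np.exp(2j * np.pi * ns * x))

print(lhs, rhs.real)                     # the two agree (rhs has a vanishing imaginary part)
```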

As an application of this, consider the theta function

\[\vartheta(s)=\sum_{n=-\infty}^\infty e^{-\pi n^2s}\]

defined for $s>0$ (or for $s\in\C$ with $\Re(s)>0$ if you’re feeling adventurous). Because the function $f(x)=e^{-\pi sx^2}$ is in $\mc S(\R)$ (since $s>0$), and because $\hat f(\xi)=s^{-1/2}e^{-\pi\xi^2/s}$, we can apply Poisson summation to get

\[\sum_{n=-\infty}^\infty e^{-\pi sn^2} = s^{-1/2}\sum_{n=-\infty}^\infty e^{-\pi n^2/s}.\]

Written in terms of $\vartheta$, this says that $\vartheta(s)=s^{-1/2}\vartheta(1/s)$. This will be useful when looking at the Riemann zeta function.
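
And one last numerical sanity check (my own): truncate the series defining $\vartheta$ and compare $\vartheta(s)$ against $s^{-1/2}\vartheta(1/s)$ for a few values of $s$.

```python
import numpy as np

def theta(s, N=200):
    # truncation of theta(s) = sum_{n in Z} exp(-pi n^2 s); the tail beyond |n| = N
    # is negligible for the values of s used here
    n = np.arange(-N, N + 1)
    return np.sum(np.exp(-np.pi * n ** 2 * s))

for s in (0.37, 1.0, 2.5):
    print(theta(s), s ** -0.5 * theta(1.0 / s))   # the two columns should agree
```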

  1. Of course, when things really get going and I’m regularly doing psets and whatnot, blogging may seem less necessary (and less logistically feasible) than it does right now 

  2. If you’re reading this post, it’s probably better to think of it as motivation for learning about Fourier analysis instead of as an introduction to Fourier analysis 

  3. And hopefully writing this post and the next will provide me with decent motivation/context for studying Tate’s thesis, where he uses Fourier analysis on some number-theoretic groups to prove results about their attached zeta functions and whatnot 

  4. This is justified because the circle S^1 is just R/Z as a (topological) group, or because S^1 is obtained from [0,1] by joining the endpoints. Use whichever justification you prefer; they’re not that different. 

  5. I think this is furthermore a Banach algebra with inner product (f,g) = \int_0^1(f\bar g)dx but I haven’t checked this 

  6. f is complex-valued, so the following is not a decomposition of \hat f(n) into a real and imaginary part 

  7. Secretly the circle S^1 and the integers Z are somehow dual in a way that can be made precise if you study Fourier analysis sufficiently generally 

  8. or any set of measure 0 
