For some reason several classes at MIT this year involve Fourier analysis.
I was always confused about this as a high schooler,
because no one ever gave me the “orthonormal basis” explanation, so here goes.
As a bonus, I also prove a form of Arrow’s Impossibility Theorem using binary Fourier analysis,
and then talk about the fancier generalizations using Pontryagin duality and the Peter-Weyl theorem.
In what follows, we let T=R/Z denote the “circle group”,
thought of as the additive group of “real numbers modulo 1”.
There is a canonical map e:T→C sending
T to the complex unit circle, given by e(θ)=exp(2πiθ).
Disclaimer: I will deliberately be sloppy with convergence issues, in part because I don’t fully understand them myself, and in part because I don’t care.
1. Synopsis
Suppose we have a domain Z and are interested in functions f:Z→C.
Naturally, the set of such functions forms a complex vector space.
We like to equip this space with a positive-definite inner product.
The idea of Fourier analysis is to then select an orthonormal basis for this set of functions,
say $(e_\xi)_\xi$, which we call the characters; the indices $\xi$ are called frequencies.
In that case, since we have a basis, every function f:Z→C becomes a sum
$$f(x) = \sum_\xi \hat f(\xi) e_\xi(x)$$
where the $\hat f(\xi)$ are complex coefficients of the basis;
appropriately we call the $\hat f(\xi)$ the Fourier coefficients.
The variable x∈Z is referred to as the physical variable.
This is generally good because the characters are deliberately chosen to be nice “symmetric” functions,
like sine or cosine waves or other periodic functions.
Thus we decompose an arbitrarily complicated function into a sum of nice ones.
For convenience, we record a few facts about orthonormal bases.
Proposition 1 (Facts about orthonormal bases)
Let $V$ be a complex Hilbert space with inner
form $\langle -, - \rangle$ and suppose $x = \sum_\xi a_\xi e_\xi$ and
$y = \sum_\xi b_\xi e_\xi$, where $(e_\xi)_\xi$ is an orthonormal basis. Then
$$\langle x, x \rangle = \sum_\xi |a_\xi|^2, \qquad a_\xi = \langle x, e_\xi \rangle, \qquad \langle x, y \rangle = \sum_\xi a_\xi \overline{b_\xi}.$$
2. Common Examples
2.1. Binary Fourier analysis on $\{\pm 1\}^n$
Let $Z = \{\pm 1\}^n$ for some positive integer $n$,
so we are considering functions $f(x_1, \dots, x_n)$ whose $n$ arguments are each $\pm 1$.
Then the functions $Z \to \mathbb C$ form a $2^n$-dimensional vector space $\mathbb C^Z$,
and we endow it with the inner form
$$\langle f, g \rangle = \frac{1}{2^n} \sum_{x \in Z} f(x) \overline{g(x)}.$$
In particular,
$$\langle f, f \rangle = \frac{1}{2^n} \sum_{x \in Z} |f(x)|^2$$
is the average of the squares; this also establishes that $\langle -, - \rangle$ is positive definite.
In that case, the multilinear monomials form a basis of $\mathbb C^Z$: these are the polynomials
$$\chi_S(x_1, \dots, x_n) = \prod_{s \in S} x_s.$$
Thus our frequency set is actually the collection of subsets $S \subseteq \{1, \dots, n\}$, and we have a decomposition
$$f = \sum_{S \subseteq \{1, \dots, n\}} \hat f(S) \chi_S.$$
Example 2 (An example of binary Fourier analysis)
Let $n = 2$. Then binary functions $\{\pm 1\}^2 \to \mathbb C$ have a
basis given by the four polynomials
$$1, \quad x_1, \quad x_2, \quad x_1 x_2.$$
For example, consider the function $f$ which is $1$ at $(1,1)$ and $0$ elsewhere. Then we can put
$$f(x_1, x_2) = \frac{x_1 + 1}{2} \cdot \frac{x_2 + 1}{2} = \frac14 \left( 1 + x_1 + x_2 + x_1 x_2 \right).$$
So the Fourier coefficients are $\hat f(S) = \frac14$ for each of the four $S$'s.
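As a quick numerical sanity check of this example (the code and names here are illustrative, not anything from the text), we can compute each $\hat f(S) = \langle f, \chi_S \rangle$ directly from the definitions:

```python
import math
from itertools import product

# Domain Z = {±1}^2, with inner form <f, g> = (1/4) * sum_x f(x) * conj(g(x)).
Z = list(product([1, -1], repeat=2))

def inner(f, g):
    return sum(f(x) * complex(g(x)).conjugate() for x in Z) / len(Z)

def chi(S):
    # Character chi_S(x) = product of x_s over s in S.
    return lambda x: math.prod(x[s] for s in S)

# f is 1 at (1, 1) and 0 elsewhere.
f = lambda x: 1 if x == (1, 1) else 0

# Fourier coefficients f_hat(S) = <f, chi_S>; each should equal 1/4.
coeffs = {S: inner(f, chi(S)) for S in [(), (0,), (1,), (0, 1)]}
print(coeffs)
```

Only the single point $(1,1)$ contributes to each inner product, which is why all four coefficients come out to $\frac14$.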
This notion is useful in particular for binary functions $f \colon \{\pm 1\}^n \to \{\pm 1\}$;
for these functions (and products thereof), we always have ⟨f,f⟩=1.
It is worth noting that the frequency ∅ plays a special role:
Exercise 3. Show that
$$\hat f(\varnothing) = \frac{1}{|Z|} \sum_{x \in Z} f(x).$$
2.2. Fourier analysis on finite groups Z
This is the Fourier analysis used in this post and
this post.
Here, we have a finite abelian group Z, and consider functions Z→C;
this is a ∣Z∣-dimensional vector space. The inner product is the same as before:
$$\langle f, g \rangle = \frac{1}{|Z|} \sum_{x \in Z} f(x) \overline{g(x)}.$$
Now here is how we generate the characters. We equip Z with a non-degenerate symmetric bilinear form
$$Z \times Z \to \mathbb T, \qquad (\xi, x) \mapsto \xi \cdot x.$$
Experts may already recognize this as a choice of isomorphism between Z and
its Pontryagin dual. This time the characters are given by
$$(e_\xi)_{\xi \in Z} \qquad \text{where} \qquad e_\xi(x) = e(\xi \cdot x).$$
In this way, the set of frequencies is also Z,
but the ξ∈Z play very different roles from the “physical” x∈Z.
(It is not too hard to check that these indeed form an orthonormal basis of the
function space $\mathbb C^{|Z|}$, since we assumed that $\cdot$ is non-degenerate.)
Example 4 (Cube roots of unity filter)
Suppose $Z = \mathbb Z/3\mathbb Z$, with the inner form given by $\xi \cdot x = (\xi x)/3$.
Let $\omega = \exp\left(\frac{2\pi i}{3}\right)$ be a primitive cube root of unity. Note that
$$e_\xi(x) = \begin{cases} 1 & \xi = 0 \\ \omega^x & \xi = 1 \\ \omega^{2x} & \xi = 2. \end{cases}$$
Then given $f \colon Z \to \mathbb C$ with $f(0) = a$, $f(1) = b$, $f(2) = c$, we obtain
$$f(x) = \frac{a+b+c}{3} \cdot 1 + \frac{a + \omega^2 b + \omega c}{3} \cdot \omega^x + \frac{a + \omega b + \omega^2 c}{3} \cdot \omega^{2x}.$$
In this way we derive that the transforms are
$$\hat f(0) = \frac{a+b+c}{3}, \qquad \hat f(1) = \frac{a + \omega^2 b + \omega c}{3}, \qquad \hat f(2) = \frac{a + \omega b + \omega^2 c}{3}.$$
Exercise 5. Show that
$$\hat f(0) = \frac{1}{|Z|} \sum_{x \in Z} f(x).$$
Olympiad contestants may recognize the previous example as a “roots of unity filter”,
which is exactly the point. For concreteness, suppose one wants to compute
$$\binom{1000}{0} + \binom{1000}{3} + \dots + \binom{1000}{999}.$$
In that case, we can consider the function
$w \colon \mathbb Z/3 \to \mathbb C$
such that $w(0) = 1$ but $w(1) = w(2) = 0$.
By abuse of notation we will also think of w as a function
$w \colon \mathbb Z \twoheadrightarrow \mathbb Z/3 \to \mathbb C$. Then the sum in question is
$$\sum_n \binom{1000}{n} w(n) = \sum_n \binom{1000}{n} \sum_\xi \hat w(\xi) \omega^{\xi n} = \sum_\xi \hat w(\xi) \left( 1 + \omega^\xi \right)^{1000}.$$
In our situation, we have $\hat w(0) = \hat w(1) = \hat w(2) = \frac13$,
and we have evaluated the desired sum.
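As a sanity check, this computation can be done in exact integer arithmetic. The closed form $(2^{1000}-1)/3$ used below is a consequence of the filter (using $(1+\omega)^{1000} + (1+\omega^2)^{1000} = \omega^2 + \omega = -1$), not something stated above:

```python
import math

# Direct computation of C(1000,0) + C(1000,3) + ... + C(1000,999).
direct = sum(math.comb(1000, n) for n in range(0, 1001, 3))

# Roots-of-unity filter: the sum equals
#   (2^1000 + (1+w)^1000 + (1+w^2)^1000) / 3,   w = exp(2*pi*i/3),
# and since (1+w)^1000 + (1+w^2)^1000 = w^2 + w = -1, this is (2^1000 - 1)/3.
filtered = (2**1000 - 1) // 3

print(direct == filtered)  # True
```

Working with exact integers avoids any floating-point issues with the complex roots of unity.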
More generally, we can take any periodic weight w and use Fourier analysis in
order to interchange the order of summation.
Example 6 (Binary Fourier analysis)
Suppose $Z = \{\pm 1\}^n$, viewed as an abelian group under pointwise
multiplication, hence isomorphic to $(\mathbb Z/2\mathbb Z)^{\oplus n}$.
Assume we pick the dot product defined by
$$\xi \cdot x = \frac12 \sum_i \xi_i x_i,$$
where $\xi = (\xi_1, \dots, \xi_n)$ and $x = (x_1, \dots, x_n)$, and where each coordinate is regarded as an element of $\mathbb Z/2\mathbb Z = \{0, 1\}$ via the isomorphism above (so $-1 \leftrightarrow 1$).
We claim this coincides with the first example we gave.
Indeed, let $S \subseteq \{1, \dots, n\}$ and let $\xi \in \{\pm 1\}^n$ be the element which is $-1$ at positions in $S$,
and $+1$ at positions not in $S$.
Then the character $\chi_S$ from the previous example coincides with the
character $e_\xi$ in the new notation. In particular, $\hat f(S) = \hat f(\xi)$.
Thus Fourier analysis on a finite group Z subsumes binary Fourier analysis.
2.3. Fourier series for functions in $L^2([-\pi, \pi])$
Now we consider the space $L^2([-\pi,\pi])$ of square-integrable functions
$[-\pi,\pi] \to \mathbb C$, with inner form
$$\langle f, g \rangle = \frac{1}{2\pi} \int_{[-\pi,\pi]} f(x) \overline{g(x)} \, dx.$$
Sadly, this is not a finite-dimensional vector space,
but fortunately it is a Hilbert space so we are still fine.
In this case, an orthonormal basis must allow infinite linear combinations,
as long as the sum of squares is finite.
Now, it turns out in this case that
$$(e_n)_{n \in \mathbb Z} \qquad \text{where} \qquad e_n(x) = \exp(inx)$$
is an orthonormal basis for $L^2([-\pi,\pi])$. Thus this time the frequency set $\mathbb Z$ is infinite.
So every function f∈L2([−π,π]) decomposes as
$$f(x) = \sum_n \hat f(n) \exp(inx)$$
for certain complex coefficients $\hat f(n)$.
This is a little worse than our finite examples: instead of a finite sum on the right-hand side,
we actually have an infinite sum.
This is because our set of frequencies is now Z, which isn’t finite.
In this case the coefficients $\hat f(n)$ need not be finitely supported,
but they do satisfy $\sum_n |\hat f(n)|^2 < \infty$.
Since the frequency set is indexed by Z,
we call this a Fourier series to reflect the fact that the index is n∈Z.
Exercise 7. Show once again that
$$\hat f(0) = \frac{1}{2\pi} \int_{[-\pi,\pi]} f(x) \, dx.$$
Often we require that the function f satisfies f(−π)=f(π),
so that f becomes a periodic function,
and we can think of it as f:T→C.
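To make this concrete, here is a numerical check on the example $f(x) = x$ (our own choice, not from the text); integration by parts gives $\hat f(n) = (-1)^n i / n$ for $n \ne 0$, and a midpoint-rule quadrature reproduces this:

```python
import numpy as np

# Midpoint-rule approximation of f_hat(n) = (1/(2*pi)) * int_{-pi}^{pi} x e^{-inx} dx.
M = 200_000
h = 2 * np.pi / M
xs = -np.pi + (np.arange(M) + 0.5) * h

def f_hat(n):
    return np.sum(xs * np.exp(-1j * n * xs)) * h / (2 * np.pi)

for n in [1, 2, 3]:
    exact = (-1) ** n * 1j / n  # from integration by parts
    print(n, f_hat(n), exact)   # numeric and exact values agree closely
```

Note the coefficients decay only like $1/n$, yet $\sum_n |\hat f(n)|^2 = \sum_{n \ne 0} 1/n^2$ is finite, consistent with the square-summability requirement above.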
2.4. Summary
We summarize our various flavors of Fourier analysis in the following table.
Type | Physical var | Frequency var | Basis functions
Binary | $\{\pm 1\}^n$ | subsets $S \subseteq \{1, \dots, n\}$ | $\prod_{s \in S} x_s$
Finite group | $Z$ | $\xi \in Z$, choice of $\cdot$ | $e(\xi \cdot x)$
Fourier series | $\mathbb T$ or $[-\pi, \pi]$ | $n \in \mathbb Z$ | $\exp(inx)$
In fact, we will soon see that all these examples are subsumed by Pontryagin duality for compact groups G.
3. Parseval and friends
The notion of an orthonormal basis makes several “big-name” results in Fourier analysis quite lucid.
Basically, we can take every result from Proposition 1,
translate it into the context of our Fourier analysis, and get a big-name result.
Corollary 8 (Parseval theorem)
Let $f \colon Z \to \mathbb C$, where $Z$ is a finite abelian group. Then
$$\sum_\xi |\hat f(\xi)|^2 = \frac{1}{|Z|} \sum_{x \in Z} |f(x)|^2.$$
Similarly, if $f \colon [-\pi, \pi] \to \mathbb C$ is square-integrable then its Fourier series satisfies
$$\sum_n |\hat f(n)|^2 = \frac{1}{2\pi} \int_{[-\pi,\pi]} |f(x)|^2 \, dx.$$
Proof: Recall that ⟨f,f⟩ is equal to the square sum of the coefficients. □
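Parseval for a finite group is easy to verify numerically; here is a quick check on a random function on $Z = \mathbb Z/7\mathbb Z$ (a made-up example for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 7
f = rng.normal(size=N) + 1j * rng.normal(size=N)   # random f : Z/7Z -> C

# Characters e_xi(x) = exp(2*pi*i*xi*x/N); coefficients f_hat(xi) = <f, e_xi>.
x = np.arange(N)
f_hat = np.array([np.mean(f * np.exp(-2j * np.pi * xi * x / N)) for xi in range(N)])

lhs = np.sum(np.abs(f_hat) ** 2)   # sum over frequencies xi
rhs = np.mean(np.abs(f) ** 2)      # (1/|Z|) * sum over physical x
print(np.isclose(lhs, rhs))  # True
```

The same five lines work for any $N$, since the characters of $\mathbb Z/N\mathbb Z$ are exactly the $N$-th roots of unity raised to powers.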
Corollary 9 (Formulas for $\hat f$)
Let $f \colon Z \to \mathbb C$, where $Z$ is a finite abelian group. Then
$$\hat f(\xi) = \frac{1}{|Z|} \sum_{x \in Z} f(x) \overline{e_\xi(x)}.$$
Similarly, if $f \colon [-\pi, \pi] \to \mathbb C$ is square-integrable then its Fourier coefficients are given by
$$\hat f(n) = \frac{1}{2\pi} \int_{[-\pi,\pi]} f(x) \exp(-inx) \, dx.$$
Proof: Recall that in an orthonormal basis (eξ)ξ,
the coefficient of eξ in f is ⟨f,eξ⟩. □
Note in particular what happens if we select ξ=0 in the above!
Corollary 10 (Plancherel theorem)
Let $f, g \colon Z \to \mathbb C$, where $Z$ is a finite abelian group. Then
$$\langle f, g \rangle = \sum_{\xi \in Z} \hat f(\xi) \overline{\hat g(\xi)}.$$
Similarly, if $f, g \colon [-\pi, \pi] \to \mathbb C$ are square-integrable then
$$\langle f, g \rangle = \sum_n \hat f(n) \overline{\hat g(n)}.$$
Proof: Guess! □
4. (Optional) Arrow’s Impossibility Theorem
As an application, we now prove a form of Arrow’s
theorem.
Consider n voters voting among 3 candidates A, B, C.
Each voter specifies a tuple $v_i = (x_i, y_i, z_i) \in \{\pm 1\}^3$ as follows:
$x_i = 1$ if voter $i$ ranks $A$ ahead of $B$, and $x_i = -1$ otherwise.
$y_i = 1$ if voter $i$ ranks $B$ ahead of $C$, and $y_i = -1$ otherwise.
$z_i = 1$ if voter $i$ ranks $C$ ahead of $A$, and $z_i = -1$ otherwise.
Tacitly, we only consider 3!=6 possibilities for vi:
we forbid “paradoxical” votes of the form xi=yi=zi by assuming that
people’s votes are consistent (meaning the preferences are transitive).
Then, we can consider a voting mechanism
$$f, g, h \colon \{\pm 1\}^n \to \{\pm 1\}$$
such that $f(x_\bullet)$ is the global preference of $A$ vs.
$B$, $g(y_\bullet)$ is the global preference of $B$ vs.
$C$, and $h(z_\bullet)$ is the global preference of $C$ vs. $A$.
We’d like to avoid situations where the global preference
(f(x∙),g(y∙),h(z∙)) is itself paradoxical.
In fact, we will prove the following theorem:
Theorem 11 (Arrow impossibility theorem)
Assume that $(f, g, h)$ always avoids paradoxical outcomes,
and assume $\mathbb E f = \mathbb E g = \mathbb E h = 0$.
Then $(f, g, h)$ is either a dictatorship or anti-dictatorship: there exists a “dictator” $k$ such that
$$f(x_\bullet) = \pm x_k, \qquad g(y_\bullet) = \pm y_k, \qquad h(z_\bullet) = \pm z_k,$$
where all three signs coincide.
The usual hypothesis of “independence of irrelevant alternatives” is reflected in the setup itself: $f$ depends only on the pairwise preferences $x_\bullet$, and similarly for $g$ and $h$. The assumption
$\mathbb E f = \mathbb E g = \mathbb E h = 0$ provides symmetry (and e.g.
excludes the possibility that $f$, $g$, $h$ are constant functions which ignore voter input).
Unlike the usual Arrow theorem,
we do not assume that f(+1,…,+1)=+1 (hence possibility of anti-dictatorship).
To this end, we actually prove the following result:
Lemma 12. Assume the $n$ voters vote independently at random among the $3! = 6$ possibilities.
The probability of a paradoxical outcome is exactly
$$\frac14 + \frac14 \sum_{S \subseteq \{1, \dots, n\}} \left( -\frac13 \right)^{|S|} \left( \hat f(S) \hat g(S) + \hat g(S) \hat h(S) + \hat h(S) \hat f(S) \right).$$
Proof: The outcome is paradoxical exactly when $f(x_\bullet) = g(y_\bullet) = h(z_\bullet)$, so the probability equals
$$\mathbb E \left[ \frac{(1+f)(1+g)(1+h)}{8} + \frac{(1-f)(1-g)(1-h)}{8} \right] = \frac14 + \frac14 \, \mathbb E\left[ fg + gh + hf \right],$$
and expanding $f$, $g$, $h$ in the basis $(\chi_S)_S$ reduces this to computing $\mathbb E \, \chi_S(x_\bullet) \chi_T(y_\bullet)$ and its cyclic analogues.
If $S \neq T$, then $\mathbb E \, \chi_S(x_\bullet) \chi_T(y_\bullet) = 0$, since if say $s \in S$,
$s \notin T$, then $x_s$ affects the parity of the product with 50% chance either way,
and is independent of any other variables in the product.
On the other hand, suppose $S = T$. Then
$$\chi_S(x_\bullet) \chi_S(y_\bullet) = \prod_{s \in S} x_s y_s.$$
Note that $x_s y_s$ is equal to $1$ with probability $\frac13$ and $-1$ with
probability $\frac23$ (since $(x_s, y_s, z_s)$ is uniform among the $3! = 6$ consistent choices, which we can enumerate).
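This $\frac13$ versus $\frac23$ split is easy to confirm by enumerating the six consistent votes (a throwaway check, not part of the proof):

```python
from itertools import product

# The 3! = 6 consistent votes (x, y, z) in {±1}^3: exclude the two
# paradoxical tuples (1, 1, 1) and (-1, -1, -1).
votes = [v for v in product([1, -1], repeat=3) if len(set(v)) > 1]

# x*y = +1 for exactly 2 of the 6 votes, so P(x*y = 1) = 1/3 and
# E[x*y] = (1/3) - (2/3) = -1/3.
p_plus = sum(1 for (x, y, z) in votes if x * y == 1) / len(votes)
exp_xy = sum(x * y for (x, y, z) in votes) / len(votes)
print(p_plus, exp_xy)
```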
From this an inductive calculation on $|S|$ gives that
$$\prod_{s \in S} x_s y_s = \begin{cases} +1 & \text{with probability } \frac12 \left( 1 + (-1/3)^{|S|} \right) \\ -1 & \text{with probability } \frac12 \left( 1 - (-1/3)^{|S|} \right), \end{cases}$$
so $\mathbb E \prod_{s \in S} x_s y_s = (-1/3)^{|S|}$. $\square$
But now we can just use weak inequalities.
We have $\hat f(\varnothing) = \mathbb E f = 0$ and similarly for $g$ and $h$,
so we may restrict attention to $|S| \ge 1$.
We then combine the famous inequality $|ab + bc + ca| \le a^2 + b^2 + c^2$ (which is
true across all real numbers) to deduce that the probability of a paradoxical outcome is at least
$$\frac14 - \frac14 \sum_{|S| \ge 1} \left( \frac13 \right)^{|S|} \left( \hat f(S)^2 + \hat g(S)^2 + \hat h(S)^2 \right) \ge \frac14 - \frac14 \cdot \frac13 \cdot (1 + 1 + 1) = 0,$$
with the last step by Parseval: since $f$, $g$, $h$ are $\pm 1$-valued, $\sum_S \hat f(S)^2 = \langle f, f \rangle = 1$, and likewise for $g$ and $h$.
So all inequalities must be sharp, and in particular f, g,
h are supported on one-element sets, i.e. they are linear in inputs.
As f, g, h are ±1 valued, each f, g,
h is itself either a dictator or anti-dictator function.
Since (f,g,h) is always consistent, this implies the final result.
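For small $n$ the whole theorem can be brute-forced. The following sketch (our own construction, assuming $n = 2$ voters) enumerates all triples of balanced functions and confirms that exactly the four dictator/anti-dictator triples avoid paradoxes:

```python
from itertools import combinations, product

inputs = list(product([1, -1], repeat=2))                       # all (x_1, x_2)
votes = [v for v in product([1, -1], repeat=3) if len(set(v)) > 1]

# Balanced functions {±1}^2 -> {±1} (those with E f = 0): pick which
# two of the four inputs map to +1.
balanced = [dict(zip(inputs, (1 if i in plus else -1 for i in range(4))))
            for plus in combinations(range(4), 2)]

def paradox_free(f, g, h):
    # Each voter independently picks any consistent vote.
    for v1, v2 in product(votes, repeat=2):
        out = (f[(v1[0], v2[0])], g[(v1[1], v2[1])], h[(v1[2], v2[2])])
        if len(set(out)) == 1:          # outcome (1,1,1) or (-1,-1,-1)
            return False
    return True

good = [(f, g, h) for f in balanced for g in balanced for h in balanced
        if paradox_free(f, g, h)]
print(len(good))  # 4: (±x_k, ±y_k, ±z_k) for k = 1, 2 with matching signs
```

This checks $6^3 = 216$ candidate mechanisms against all $36$ vote profiles, so it runs instantly.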
5. Pontryagin duality
In fact all the examples we have covered can be subsumed as special cases of Pontryagin duality,
where we replace the domain with a general group G.
In what follows, we assume G is a locally compact abelian (LCA) group, which just means that:
G is an abelian topological group,
the topology on G is Hausdorff, and
the topology on G is locally compact: every point of G has a compact neighborhood.
Notice that our previous examples fall into this category:
Example 13 (Examples of locally compact abelian groups)
Any finite abelian group Z with the discrete topology is LCA.
The circle group T is LCA and also in fact compact.
The real numbers R are an example of an LCA group which is not compact.
5.1. The Pontryagin dual
The key definition is:
Definition 14. Let $G$ be an LCA group. Then its Pontryagin dual is the abelian group
$$\widehat G := \{ \text{continuous group homomorphisms } \xi \colon G \to \mathbb T \}.$$
The maps $\xi$ are called characters.
By equipping it with the compact-open topology,
we make $\widehat G$ into an LCA group as well.
Example 15 (Examples of Pontryagin duals)
$\widehat{\mathbb Z} \cong \mathbb T$.
$\widehat{\mathbb T} \cong \mathbb Z$.
The characters are given by $\theta \mapsto n\theta$ for $n \in \mathbb Z$.
$\widehat{\mathbb R} \cong \mathbb R$.
This is because a nonzero continuous homomorphism $\mathbb R \to S^1$
is determined by the fiber above $1 \in S^1$. (Covering projections, anyone?)
$\widehat{\mathbb Z/n\mathbb Z} \cong \mathbb Z/n\mathbb Z$,
the characters $\xi$ being determined by the image $\xi(1) \in \mathbb T$.
$\widehat{G \times H} \cong \widehat G \times \widehat H$.
If $Z$ is a finite abelian group,
then the previous two examples (and the structure theorem for abelian groups) imply that $\widehat Z \cong Z$,
though not canonically. You may now recognize that the bilinear form
$\cdot \colon Z \times Z \to \mathbb T$ is exactly a choice of isomorphism $Z \to \widehat Z$.
For any LCA group $G$, the dual of $\widehat G$ is canonically isomorphic to $G$,
id est there is a natural isomorphism
$$G \cong \widehat{\widehat G} \qquad \text{by} \qquad x \mapsto \left( \xi \mapsto \xi(x) \right).$$
This is the Pontryagin duality theorem.
(It is analogous to the isomorphism $(V^\vee)^\vee \cong V$ for finite-dimensional vector spaces $V$.)
5.2. The orthonormal basis in the compact case
Now assume G is LCA but also compact,
and thus has a unique Haar measure $\mu$ such that $\mu(G) = 1$; this lets us integrate over $G$.
Let $L^2(G)$ be the space of square-integrable functions to $\mathbb C$, i.e.
$$L^2(G) = \left\{ f \colon G \to \mathbb C \ \text{ such that } \int_G |f|^2 \, d\mu < \infty \right\}.$$
Thus we can equip it with the inner form
$$\langle f, g \rangle = \int_G f \overline{g} \, d\mu.$$
In that case, we get all the results we wanted before:
Theorem 16 (Characters of $G$ form an orthonormal basis)
Assume $G$ is LCA and compact. Then $\widehat G$ is discrete, and the characters
$$(e_\xi)_{\xi \in \widehat G} \qquad \text{by} \qquad e_\xi(x) = e(\xi(x)) = \exp(2\pi i \, \xi(x))$$
form an orthonormal basis of $L^2(G)$. Thus for each $f \in L^2(G)$ we have
$$f = \sum_{\xi \in \widehat G} \hat f(\xi) e_\xi,$$
where
$$\hat f(\xi) = \langle f, e_\xi \rangle = \int_G f(x) \exp(-2\pi i \, \xi(x)) \, d\mu.$$
The sum $\sum_{\xi \in \widehat G}$ makes sense since $\widehat G$ is discrete. In particular,
letting $G = Z$ for a finite abelian group $Z$ recovers the “Fourier transform on finite groups” described earlier.
5.3. The Fourier transform of the non-compact case
If G is LCA but not compact, then Theorem 16 becomes false.
On the other hand, it is still possible to define a transform, but one needs to be a little more careful.
The generic example to keep in mind in what follows is G=R.
In what follows, we fix a Haar measure μ for G.
(This $\mu$ is now only determined up to scaling: we can no longer normalize it by requiring $\mu(G) = 1$, since $\mu(G) = \infty$.)
One considers this time the space L1(G) of absolutely integrable functions.
Then one directly defines the Fourier transform of $f \in L^1(G)$ to be
$$\hat f(\xi) = \int_G f \overline{e_\xi} \, d\mu,$$
imitating the previous definitions in the absence of an inner product.
This $\hat f$ may not be $L^1$, but it is at least bounded. Then we manage to at least salvage:
Theorem 17 (Fourier inversion on $L^1(G)$)
Take an LCA group $G$ and fix a Haar measure $\mu$ on it.
One can select a unique dual measure $\widehat\mu$ on $\widehat G$ such that if $f \in L^1(G)$ and
$\hat f \in L^1(\widehat G)$, the “Fourier inversion formula”
$$f(x) = \int_{\widehat G} \hat f(\xi) e_\xi(x) \, d\widehat\mu(\xi)$$
holds almost everywhere. It holds everywhere if $f$ is continuous.
Notice the extra nuance of having to select measures,
because it is no longer the case that G has a single distinguished measure.
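As a numerical illustration of these conventions for $G = \mathbb R$ (not an example from the text): with $e_\xi(x) = e(\xi x) = \exp(2\pi i \xi x)$, the Gaussian $\exp(-\pi x^2)$ is its own Fourier transform, which we can confirm by quadrature:

```python
import numpy as np

# Midpoint grid on [-10, 10]; the Gaussian's tails beyond this are negligible.
M = 200_000
h = 20.0 / M
xs = -10.0 + (np.arange(M) + 0.5) * h
f = np.exp(-np.pi * xs ** 2)

def f_hat(xi):
    # f_hat(xi) = int f(x) * conj(e_xi(x)) dx with e_xi(x) = exp(2*pi*i*xi*x).
    return np.sum(f * np.exp(-2j * np.pi * xi * xs)) * h

for xi in [0.0, 0.5, 1.0]:
    print(xi, f_hat(xi).real, np.exp(-np.pi * xi ** 2))  # columns agree
```

This self-duality is one reason the $\exp(2\pi i \xi x)$ normalization is pleasant: no stray factors of $2\pi$ appear in the transform pair.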
Despite the fact that the $e_\xi$ no longer form an orthonormal basis,
the transformed function $\hat f \colon \widehat G \to \mathbb C$ is still often useful.
In particular, the transform has special names for a few special $G$:
if $G = \mathbb Z$, then $\widehat G = \mathbb T$,
and this construction gives the poorly named
“discrete-time Fourier transform” (DTFT).
5.4. Summary
In summary,
Given any LCA group $G$, we can transform sufficiently nice functions on $G$ into functions on $\widehat G$.
If G is compact, then we have the nicest situation possible:
$L^2(G)$ is an inner product space with $\langle f, g \rangle = \int_G f \overline{g} \, d\mu$,
and the $e_\xi$ form an orthonormal basis as $\xi$ ranges over $\widehat G$.
If G is not compact, then we no longer get an orthonormal basis or even an inner product space,
but it is still possible to define the transform
$$\hat f \colon \widehat G \to \mathbb C$$
for $f \in L^1(G)$. If $\hat f$ is also in $L^1(\widehat G)$ we still get a
“Fourier inversion formula” expressing $f$ in terms of $\hat f$.
We summarize our various flavors of Fourier analysis for various G in the following.
In the first half G is compact, in the second half G is not.
Name | Domain $G$ | Dual $\widehat G$ | Characters
Binary Fourier analysis | $\{\pm 1\}^n$ | $S \subseteq \{1, \dots, n\}$ | $\prod_{s \in S} x_s$
Fourier transform on finite groups | $Z$ | $\xi \in \widehat Z \cong Z$ | $e(\xi \cdot x)$
Discrete Fourier transform | $\mathbb Z/n\mathbb Z$ | $\xi \in \mathbb Z/n\mathbb Z$ | $e(\xi x / n)$
Fourier series | $\mathbb T \cong [-\pi, \pi]$ | $n \in \mathbb Z$ | $\exp(inx)$
Continuous Fourier transform | $\mathbb R$ | $\xi \in \mathbb R$ | $e(\xi x)$
Discrete-time Fourier transform | $\mathbb Z$ | $\xi \in \mathbb T \cong [-\pi, \pi]$ | $\exp(i \xi n)$
You might notice that the various names are awful.
This is part of the reason I got confused as a high school student:
every type of Fourier series above has its own Wikipedia article.
If it were up to me, we would just use the term “G-Fourier transform”,
and that would make everyone’s lives a lot easier.
6. Peter-Weyl
In fact, if $G$ is a compact Lie group,
even if $G$ is not abelian we can still give an orthonormal basis of $L^2(G)$
(the square-integrable functions on $G$).
It turns out in this case the characters are attached to complex irreducible
representations of G (and in what follows all representations are complex).
The result is given by the Peter-Weyl theorem. First, we need the following result:
Lemma 18 (Compact Lie groups have unitary reps)
Any finite-dimensional (complex) representation V of a compact Lie group G is unitary,
meaning it can be equipped with a G-invariant inner form.
Consequently, V is completely reducible:
it splits into the direct sum of irreducible representations of G.
Proof: Suppose B:V×V→C is any inner product.
Equip $G$ with a right-invariant Haar measure $dg$. Then we can define an “averaged” inner form
$$\widetilde B(v, w) = \int_G B(gv, gw) \, dg.$$
Then $\widetilde B$ is the desired $G$-invariant inner form.
Now, the fact that V is completely reducible follows from the fact that given a subrepresentation of V,
its orthogonal complement is also a subrepresentation. □
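The same averaging trick works verbatim for finite groups, with the integral replaced by a finite sum. Here is a sketch for the 2-dimensional representation of $S_3$ as symmetries of a triangle (the matrices and the starting form below are our own choices):

```python
import numpy as np

r = np.array([[np.cos(2 * np.pi / 3), -np.sin(2 * np.pi / 3)],
              [np.sin(2 * np.pi / 3),  np.cos(2 * np.pi / 3)]])  # rotation by 120 deg
s = np.array([[1.0, 0.0], [0.0, -1.0]])                          # a reflection
G = [np.eye(2), r, r @ r, s, r @ s, r @ r @ s]                   # the six elements

# Start from an arbitrary inner form B(v, w) = v* M w ...
M = np.array([[2.0, 1.0], [1.0, 3.0]])
# ... and average it over the group, as in the proof above.
M_avg = sum(g.conj().T @ M @ g for g in G) / len(G)

# The averaged form is G-invariant: B~(gv, gw) = B~(v, w) for all g.
invariant = all(np.allclose(g.conj().T @ M_avg @ g, M_avg) for g in G)
print(invariant)  # True
```

Invariance holds because conjugating the sum by any fixed $g$ merely permutes the summands.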
The Peter-Weyl theorem then asserts that the finite-dimensional irreducible
unitary representations essentially give an orthonormal basis for L2(G), in the following sense.
Let $V = (V, \rho)$ be such a representation of $G$, and fix an orthonormal basis $e_1, \dots, e_d$ of $V$ (where $d = \dim V$). The $(i,j)$-th matrix coefficient for $V$ is then given by the composition
$$G \xrightarrow{\ \rho\ } \mathrm{GL}(V) \xrightarrow{\ \pi_{ij}\ } \mathbb C$$
where $\pi_{ij}$ is the projection onto the $(i,j)$-th entry of the matrix.
We abbreviate πij∘ρ to ρij. Then the theorem is:
Theorem 19 (Peter-Weyl)
Let G be a compact Lie group.
Let Σ denote the (pairwise non-isomorphic) irreducible finite-dimensional
unitary representations of G. Then
$$\left\{ \sqrt{\dim V} \, \rho_{ij} \ \middle\vert \ (V, \rho) \in \Sigma, \text{ and } 1 \le i, j \le \dim V \right\}$$
is an orthonormal basis of $L^2(G)$.
Strictly, I should say Σ is a set of representatives of the isomorphism
classes of irreducible unitary representations, one for each isomorphism class.
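Although the theorem is stated for compact Lie groups, the analogous statement for a finite group (with normalized counting measure) can be checked in a few lines. For $G = S_3$ the irreducibles are the trivial, sign, and 2-dimensional representations; here is a sketch using our own choice of matrices:

```python
import numpy as np

r = np.array([[np.cos(2 * np.pi / 3), -np.sin(2 * np.pi / 3)],
              [np.sin(2 * np.pi / 3),  np.cos(2 * np.pi / 3)]])
s = np.array([[1.0, 0.0], [0.0, -1.0]])
rho = [np.eye(2), r, r @ r, s, r @ s, r @ r @ s]   # 2-dim rep of S_3
sign = [1, 1, 1, -1, -1, -1]                        # sign representation

# Each basis function of L^2(G) is its vector of values, one per group element;
# matrix coefficients of the 2-dim rep get scaled by sqrt(dim V) = sqrt(2).
basis = [np.ones(6), np.array(sign, dtype=float)]
for i in range(2):
    for j in range(2):
        basis.append(np.sqrt(2) * np.array([g[i, j] for g in rho]))

# Gram matrix under <u, v> = (1/|G|) * sum_g u(g) * conj(v(g)).
gram = np.array([[np.mean(u * v) for v in basis] for u in basis])
print(np.allclose(gram, np.eye(6)))  # True
```

Note $1^2 + 1^2 + 2^2 = 6 = |S_3|$, so these $6$ scaled matrix coefficients really do span all of $L^2(G)$.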
In the special case G is abelian, all irreducible representations are one-dimensional.
A one-dimensional representation of $G$ is a map
$G \to \mathrm{GL}_1(\mathbb C) \cong \mathbb C^\times$,
but the unitary condition implies it is actually a map $G \to S^1 \cong \mathbb T$, i.e.
it is an element of $\widehat G$.