vEnhance

Dec 17, 2015

Uniqueness of solutions for diffeq's

Let $V$ be a normed finite-dimensional real vector space and let $U \subseteq V$ be an open set. A vector field on $U$ is a function $\xi : U \to V$. (In the words of Gaitsgory: “you should imagine a vector field as a domain, and at every point there is a little vector growing out of it.”)

The idea of a differential equation is as follows. Imagine your vector field specifies a velocity at each point. So you initially place a particle somewhere in $U$, and then let it move freely, guided by the arrows in the vector field. (There are plenty of good pictures online.) Intuitively, for nice $\xi$ it should be the case that the resulting trajectory is unique. This is the main take-away; the proof itself is just for completeness.

This is formalized by the notion of a differential equation:

Definition 1. Let $\gamma : (-\varepsilon, \varepsilon) \to U$ be a differentiable path. We say $\gamma$ is a solution to the differential equation defined by $\xi$ if for each $t \in (-\varepsilon, \varepsilon)$ we have $$\gamma'(t) = \xi(\gamma(t)).$$

Example 2 (Examples of DE’s)

Let U=V=RU = V = \mathbb R.

  1. Consider the vector field $\xi(x) = 1$. Then the solutions $\gamma$ are just $\gamma(t) = t + c$.
  2. Consider the vector field $\xi(x) = x$. Then $\gamma$ is a solution exactly when $\gamma'(t) = \gamma(t)$. It’s well-known that the solutions are $\gamma(t) = c \exp(t)$.
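These two examples are easy to sanity-check numerically. The sketch below (the helper name `euler` is my own, not from the lecture) approximates solutions with forward Euler steps and compares against the closed forms above:

```python
import math

def euler(xi, x0, t_end, n=100_000):
    """Approximate gamma(t_end) for gamma'(t) = xi(gamma(t)), gamma(0) = x0,
    using forward Euler steps of size t_end / n."""
    h = t_end / n
    x = x0
    for _ in range(n):
        x += h * xi(x)
    return x

# Example 2.1: xi(x) = 1 gives gamma(t) = t + x0, so gamma(1) should be 1.
print(euler(lambda x: 1.0, x0=0.0, t_end=1.0))
# Example 2.2: xi(x) = x with gamma(0) = 1 gives gamma(t) = e^t.
print(abs(euler(lambda x: x, x0=1.0, t_end=1.0) - math.e))  # small error
```

The error in the second case shrinks linearly in the step size, as expected for Euler's method.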

Of course, you may be used to seeing differential equations which are time-dependent: i.e. something like $\gamma'(t) = t$, for example. In fact, you can hack this to fit in the current model using the idea that time is itself just a dimension. Suppose we want to model $\gamma'(t) = F(\gamma(t), t)$. Then we instead consider $$\xi : V \times \mathbb R \to V \times \mathbb R \qquad\text{by}\qquad \xi(v, t) = (F(v, t), 1)$$ and solve the resulting differential equation over $V \times \mathbb R$. This does exactly what we want. Geometrically, this means making time into another dimension and imagining that our particle moves at a “constant speed through time”.
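The augmentation trick is easy to mirror in code. A minimal sketch (names are my own): integrate the autonomous field $\xi(x, t) = (F(x, t), 1)$ with Euler steps, advancing the time coordinate at unit speed.

```python
def solve_time_dependent(F, x0, t_end, n=100_000):
    """Integrate gamma'(t) = F(gamma(t), t) by treating time as an extra
    coordinate, i.e. using the autonomous field xi(x, t) = (F(x, t), 1)."""
    h = t_end / n
    x, t = x0, 0.0
    for _ in range(n):
        dx, dt = F(x, t), 1.0   # the augmented vector field on V x R
        x, t = x + h * dx, t + h * dt
    return x

# gamma'(t) = t with gamma(0) = 0 has the solution gamma(t) = t^2 / 2.
print(solve_time_dependent(lambda x, t: t, x0=0.0, t_end=2.0))  # close to 2
```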

The task is then mainly about finding which conditions guarantee that our differential equation behaves nicely. The answer turns out to be:

Definition 3. The vector field $\xi : U \to V$ satisfies the Lipschitz condition if $$\left\lVert \xi(x') - \xi(x'') \right\rVert \le \Lambda \left\lVert x' - x'' \right\rVert$$ holds for all $x', x'' \in U$ and some fixed constant $\Lambda$.

Note that continuously differentiable implies Lipschitz on compact subsets of $U$ (bound the derivative), which is all that is needed for the local statement below.

Theorem 4 (Picard-Lindelöf)

Let $V$ be a finite-dimensional real vector space, and let $\xi$ be a vector field on a domain $U \subseteq V$ which satisfies the Lipschitz condition.

Then for every $x_0 \in U$ there exists $\varepsilon > 0$ and a path $\gamma : (-\varepsilon, \varepsilon) \to U$ such that $\gamma'(t) = \xi(\gamma(t))$ and $\gamma(0) = x_0$. Moreover, if $\gamma_1$ and $\gamma_2$ are two solutions and $\gamma_1(t) = \gamma_2(t)$ for some $t$, then $\gamma_1 = \gamma_2$.

In fact, Peano’s existence theorem says that if we replace Lipschitz continuity with just continuity, then $\gamma$ exists but need not be unique. For example:

Example 5 (Counterexample if $\xi$ is not Lipschitz)

Let U=V=RU = V = \mathbb R and consider ξ(x)=x23\xi(x) = x^{\frac23}, with x0=0x_0 = 0. Then γ(t)=0\gamma(t) = 0 and γ(t)=(t/3)3\gamma(t) = \left( t/3 \right)^3 are both solutions to the differential equation γ(t)=γ(t)23.\gamma'(t) = \gamma(t)^{\frac 23}.

Now, for the proof of the main theorem. The main idea is the following result (sometimes called the contraction principle).

Lemma 6 (Banach Fixed-Point Theorem)

Let $(X, d)$ be a nonempty complete metric space. Let $f : X \to X$ be a map such that $$d(f(x_1), f(x_2)) \le \frac{1}{2} d(x_1, x_2)$$ for any $x_1, x_2 \in X$. Then $f$ has a unique fixed point.
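The proof of the lemma is constructive: iterating $f$ from any starting point halves the distance to the fixed point each step, so the orbit converges geometrically. A minimal numerical illustration (on $X = \mathbb R$, with an invented helper name):

```python
def fixed_point(f, x0, iters=60):
    """Iterate x -> f(x). For a (1/2)-contraction the distance to the fixed
    point at least halves each step, so 60 iterations suffices for doubles."""
    x = x0
    for _ in range(iters):
        x = f(x)
    return x

# f(x) = x/2 + 1 contracts distances by exactly 1/2; its unique fixed point is 2.
print(fixed_point(lambda x: x / 2 + 1, x0=0.0))
```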

For the proof of the main theorem, we are given $x_0 \in U$. Let $X$ be the metric space of continuous functions from $(-\varepsilon, \varepsilon)$ to the complete metric space $\overline{B}(x_0, r)$, the closed ball of radius $r$ centered at $x_0$. (Here $r > 0$ can be arbitrary, so long as the ball stays inside $U$.) It turns out that $X$ is itself a complete metric space when equipped with the sup metric $$d(f, g) = \sup_{t \in (-\varepsilon, \varepsilon)} \left\lVert f(t) - g(t) \right\rVert.$$ This is well-defined (i.e. finite) since $\overline{B}(x_0, r)$ is bounded, and $X$ inherits completeness from $\overline{B}(x_0, r)$.

We wish to use the Banach theorem on $X$, so we’ll rig a function $\Phi : X \to X$ with the property that its fixed points are solutions to the differential equation. Define it by, for every $\gamma \in X$, $$\Phi(\gamma) : t \mapsto x_0 + \int_0^t \xi(\gamma(s)) \, ds.$$ This function is contrived so that $(\Phi\gamma)(0) = x_0$ and $\Phi\gamma$ is both continuous and differentiable. By the Fundamental Theorem of Calculus, the derivative is exhibited by $$(\Phi\gamma)'(t) = \left( \int_0^t \xi(\gamma(s)) \, ds \right)' = \xi(\gamma(t)).$$ In particular, fixed points of $\Phi$ correspond exactly to solutions to our differential equation.
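The operator $\Phi$ can even be simulated directly: discretize the time interval, replace the integral with a trapezoid sum, and iterate. For $\xi(x) = x$, $x_0 = 1$ on $[0, 1]$ the iterates converge to $e^t$ (indeed, the $n$-th Picard iterate here is the degree-$n$ Taylor polynomial of $e^t$). A rough sketch with invented names:

```python
import math

def picard_step(xi, x0, ts, gamma):
    """One application of Phi: t -> x0 + integral_0^t xi(gamma(s)) ds,
    with the integral approximated by the trapezoid rule on the grid ts."""
    vals = [xi(g) for g in gamma]
    out, acc = [x0], x0
    for i in range(1, len(ts)):
        acc += 0.5 * (vals[i - 1] + vals[i]) * (ts[i] - ts[i - 1])
        out.append(acc)
    return out

# Picard iteration for gamma' = gamma, gamma(0) = 1 on [0, 1]; the limit is e^t.
n = 1000
ts = [i / n for i in range(n + 1)]
gamma = [1.0] * (n + 1)          # start from the constant path gamma_0(t) = x0
for _ in range(30):
    gamma = picard_step(lambda x: x, 1.0, ts, gamma)
print(abs(gamma[-1] - math.e))   # small discretization error
```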

A priori this output has signature $\Phi\gamma : (-\varepsilon, \varepsilon) \to V$, so we need to check that $(\Phi\gamma)(t) \in \overline{B}(x_0, r)$. We can check that

$$\begin{aligned} \left\lVert (\Phi\gamma)(t) - x_0 \right\rVert &= \left\lVert \int_0^t \xi(\gamma(s)) \, ds \right\rVert \\ &\le \left\lvert \int_0^t \left\lVert \xi(\gamma(s)) \right\rVert \, ds \right\rvert \\ &\le |t| \max_{s \in [0,t]} \left\lVert \xi(\gamma(s)) \right\rVert \\ &< \varepsilon \cdot A \end{aligned}$$

where $A = \max_{x \in \overline{B}(x_0, r)} \left\lVert \xi(x) \right\rVert$; we have $A < \infty$ since $\overline{B}(x_0, r)$ is compact. Hence by selecting $\varepsilon < r/A$, the above is bounded by $r$, so $\Phi\gamma$ indeed maps into $\overline{B}(x_0, r)$. (Note that at this point we have not used the Lipschitz condition, only that $\xi$ is continuous.)

It remains to show that $\Phi$ is contracting. Write

$$\begin{aligned} \left\lVert (\Phi\gamma_1)(t) - (\Phi\gamma_2)(t) \right\rVert &= \left\lVert \int_0^t \left( \xi(\gamma_1(s)) - \xi(\gamma_2(s)) \right) \, ds \right\rVert \\ &\le \left\lvert \int_0^t \left\lVert \xi(\gamma_1(s)) - \xi(\gamma_2(s)) \right\rVert \, ds \right\rvert \\ &\le |t| \Lambda \sup_{s \in [0,t]} \left\lVert \gamma_1(s) - \gamma_2(s) \right\rVert \\ &\le \varepsilon \Lambda \sup_{s \in (-\varepsilon, \varepsilon)} \left\lVert \gamma_1(s) - \gamma_2(s) \right\rVert \\ &= \varepsilon \Lambda \, d(\gamma_1, \gamma_2). \end{aligned}$$

Hence once again for $\varepsilon$ sufficiently small, say $\varepsilon \le \frac{1}{2\Lambda}$, we get $\varepsilon \Lambda \le \frac{1}{2}$. Since the above holds for every $t$, this implies $$d(\Phi\gamma_1, \Phi\gamma_2) \le \frac{1}{2} d(\gamma_1, \gamma_2)$$ as needed.

This is a cleaned-up version of a portion of a lecture from Math 55b in Spring 2015, instructed by Dennis Gaitsgory.