Now that we have a good dual basis for linear functionals on our tangent space, we can use it to express an important functional: the gradient of a function defined on our surface.
The Gradient
Consider a function F({u_i}) on our surface. We can look at the rate of change A_i of F along the direction of each tangent vector e_i, and form the linear functional f where
f(e_i) = A_i = ∂F({u_i})/∂u_i
Since f is a linear functional, we can express it in the dual basis. f is called the gradient of F, written ∇F, and then
∇F = ∑_i A_i e^i = ∑_i (∂F({u_i})/∂u_i) e^i
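As a concrete sketch, take a hypothetical function F(u_1, u_2) = u_1² + 3u_2 on a flat coordinate patch (the function and the evaluation point are illustrative choices, not from the text). The components A_i are the partial derivatives, which we can check against central finite differences:

```python
# Hypothetical example: F(u1, u2) = u1**2 + 3*u2 on a flat coordinate patch.
def F(u1, u2):
    return u1**2 + 3*u2

def grad_F(u1, u2):
    # Hand-computed partials: A_1 = dF/du1 = 2*u1, A_2 = dF/du2 = 3
    return (2*u1, 3.0)

def numerical_grad(f, u1, u2, h=1e-6):
    # Central finite differences approximate the partial derivatives.
    d1 = (f(u1 + h, u2) - f(u1 - h, u2)) / (2*h)
    d2 = (f(u1, u2 + h) - f(u1, u2 - h)) / (2*h)
    return (d1, d2)

a = grad_F(1.0, 2.0)
b = numerical_grad(F, 1.0, 2.0)
print(abs(a[0] - b[0]) < 1e-5, abs(a[1] - b[1]) < 1e-5)  # True True
```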
and now for any vector v = ∑_i x_i e_i in our tangent space, the value of f at v is
f(v) = ∇F⋅v = ∑_i x_i A_i
This is how much F changes with an (infinitesimal) displacement by the vector v in the tangent space.
So for any unit vector v, ∇F⋅v is the rate of change of F along v.
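As a quick numeric check of this claim (again with a hypothetical F(u1, u2) = u1² + 3u2 and an illustrative unit vector, not values from the text), ∇F⋅v should match a finite-difference rate of change of F along v:

```python
def F(u1, u2):
    return u1**2 + 3*u2

grad = (2.0, 3.0)   # ∇F at the point (1, 2), computed by hand for this F
v = (0.6, 0.8)      # a unit vector: 0.6**2 + 0.8**2 == 1

# ∇F ⋅ v, the claimed rate of change of F along v
dot = grad[0]*v[0] + grad[1]*v[1]

# Compare against a symmetric finite-difference quotient along v.
h = 1e-6
rate = (F(1.0 + h*v[0], 2.0 + h*v[1]) - F(1.0 - h*v[0], 2.0 - h*v[1])) / (2*h)
print(abs(dot - rate) < 1e-5)  # True
```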
A Geometric Interpretation of the Gradient
This is the directional derivative, which is variously defined. For our purposes here we will define it for any nonzero vector v, putting the normalization into the definition.
∇_v F = ∇F ⋅ v/∣v∣
With this definition, for all λ > 0,
∇_v F = ∇_{λv} F
and thus along any ray {λv_0} (assuming ∇_{v_0}F > 0, so that λ > 0) we can pick a λ, and thus a v = λv_0, with
λ = ∇_{v_0}F / ∣v_0∣ ⟹ ∣v∣ = ∇_v F
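This rescaling can be checked numerically. The gradient and direction below are hypothetical sample values (with ∇F⋅v_0 > 0 so that λ > 0):

```python
import math

grad = (2.0, 3.0)   # sample ∇F (hypothetical values)
v0 = (1.0, 1.0)     # any direction with ∇F ⋅ v0 > 0

def directional(v):
    # ∇_v F = ∇F ⋅ v / |v|
    return (grad[0]*v[0] + grad[1]*v[1]) / math.hypot(v[0], v[1])

lam = directional(v0) / math.hypot(v0[0], v0[1])  # λ = ∇_{v0}F / |v0|
v = (lam*v0[0], lam*v0[1])

# The rescaled v satisfies |v| = ∇_v F.
print(abs(math.hypot(v[0], v[1]) - directional(v)) < 1e-9)  # True
```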
Now we can consider the set D of all vectors v in the tangent space such that
(∇F⋅v)/∣v∣ = ∣v∣
that is, all v whose length equals the rate of change of F along v.
For ∇F itself (assuming ∇F ≠ 0), we have
(∇F⋅∇F)/∣∇F∣ = ∣∇F∣²/∣∇F∣ = ∣∇F∣
So ∇F is in D.
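Numerically, with hypothetical sample components for ∇F, the membership check is just the computation above:

```python
import math

grad = (2.0, 3.0)                      # sample ∇F (hypothetical values)
norm = math.hypot(grad[0], grad[1])    # |∇F|

# (∇F ⋅ ∇F) / |∇F| should equal |∇F|, i.e. ∇F satisfies the defining
# condition of the set D.
lhs = (grad[0]**2 + grad[1]**2) / norm
print(abs(lhs - norm) < 1e-12)  # True
```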
But ∇F is also the vector of largest length in D. To prove this we will need the Cauchy-Schwarz inequality:
u⋅v = ⟨u,v⟩ ≤ ∣u∣∣v∣
A Proof of the Cauchy-Schwarz Inequality
Form a quadratic function p(t)
p(t) = ⟨tu+v, tu+v⟩ = t²⟨u,u⟩ + 2t⟨u,v⟩ + ⟨v,v⟩
Since our quadratic form ⟨⋅,⋅⟩ is never negative, we have
t²⟨u,u⟩ + 2t⟨u,v⟩ + ⟨v,v⟩ ≥ 0
This is a parabola (assuming u ≠ 0, so that ⟨u,u⟩ > 0; if u = 0 the inequality is immediate) with a minimum where p′(t) = 0, namely at
t = −⟨u,v⟩/⟨u,u⟩
so plugging in this t we have
⟨u,v⟩²/⟨u,u⟩ − 2⟨u,v⟩²/⟨u,u⟩ + ⟨v,v⟩ ≥ 0
Multiplying by ⟨u,u⟩, simplifying, and rearranging,
⟨u,u⟩⟨v,v⟩≥⟨u,v⟩2
Taking square roots and reversing the order,
⟨u,v⟩ ≤ √(⟨u,u⟩⟨v,v⟩) = ∣u∣∣v∣
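A numeric spot-check of the inequality on random vectors (the dimension and sample count are arbitrary illustrative choices):

```python
import math
import random

random.seed(0)

def dot(u, v):
    return sum(a*b for a, b in zip(u, v))

# ⟨u,v⟩ ≤ sqrt(⟨u,u⟩⟨v,v⟩) = |u||v| should hold for every pair.
pairs = [([random.uniform(-1, 1) for _ in range(3)],
          [random.uniform(-1, 1) for _ in range(3)]) for _ in range(1000)]
all_ok = all(dot(u, v) <= math.sqrt(dot(u, u)*dot(v, v)) + 1e-12
             for u, v in pairs)
print(all_ok)  # True
```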
The Steepest Direction
And now we can consider any v with
(∇F⋅v)/∣v∣ = ∣v∣
and then apply Cauchy-Schwarz
∣v∣² = ∇F⋅v ≤ ∣∇F∣∣v∣
and so
∣v∣≤∣∇F∣
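To illustrate the conclusion, we can scan unit vectors around the circle and confirm that the largest rate of change is ∣∇F∣, attained in the direction of ∇F itself (the gradient components below are hypothetical sample values):

```python
import math

grad = (2.0, 3.0)                    # sample ∇F (hypothetical values)
norm = math.hypot(grad[0], grad[1])  # |∇F|

# Scan unit vectors around the circle; ∇F ⋅ v is the rate of change of F
# along each, and its maximum should be |∇F|.
rates = [grad[0]*math.cos(2*math.pi*k/1000) + grad[1]*math.sin(2*math.pi*k/1000)
         for k in range(1000)]
print(abs(max(rates) - norm) < 1e-3)  # True
```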
Two Interpretations of the Gradient
Now we have two ways to think about the gradient of a function F.
In the dual space, ∇F is a linear functional that describes the infinitesimal change of F according to the partial derivatives of F along the tangent basis vectors e_i.
In the tangent space, ∇F is a vector in the direction of the steepest change of F, whose length is equal to the rate of change of F in that direction.