In analysis, it is common to attack problems via approximation. Often, we know how to solve the problem in special cases.
This gives us a sequence of approximate solutions, and the hope is that their limit solves our original problem.
(1/)
An example would be the Galerkin method for parabolic PDEs in divergence form, where we obtain solutions in finite dimensional spaces.
By "projecting" the problem onto a finite-dimensional space (the special case), one then only has to solve a system of ODEs instead of a PDE.
(2/)
Through compactness, we obtain a convergent subsequence (in the weak topology) and use Sobolev embeddings to obtain strong convergence.
The limit, a priori, will only be a weak solution, i.e., less regularity and only solves the PDE in an "average" sense.
(3/)
One can then sometimes show that the solution actually has better regularity, and can even be classical.
For this thread, we consider a more abstract approach that allows us to obtain solutions with better regularity a priori.
We look at Yosida approximations.
(4/)
From here on out, H is a Hilbert space.
Let A: D(A) \subset H --> H be a (possibly unbounded) linear operator. Here D(A) is the domain of A, i.e., the set of elements of H on which A is defined.
We say that A is monotone if (Av,v)>=0 for all v in D(A).
(5/)
If in addition, we have that R(I+A)=H, i.e., I+A is surjective, where I is the identity operator, we say that A is maximal monotone.
Convex analysis texts define these differently. Roughly, a maximal monotone operator (MMO) is a monotone operator whose graph is not properly contained in the graph of any other monotone operator.
(6/)
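To make the definitions concrete, here is a toy check in H = R^2 (my own illustration, not from the thread): a symmetric positive semidefinite matrix A is monotone, and since I+A is invertible, R(I+A)=H, so A is maximal monotone.

```python
# Toy model: H = R^2, A a symmetric positive semidefinite 2x2 matrix.
# Monotone: (Av, v) >= 0.  Maximal: (I+A)v = f is solvable for every f.

def matvec(M, w):
    return [M[0][0]*w[0] + M[0][1]*w[1],
            M[1][0]*w[0] + M[1][1]*w[1]]

def dot(u, w):
    return u[0]*w[0] + u[1]*w[1]

A = [[2.0, 1.0],
     [1.0, 2.0]]                         # eigenvalues 1 and 3, so PSD

# Monotonicity: (Av, v) >= 0 on a few sample vectors.
for w in ([1.0, 0.0], [1.0, -1.0], [-2.0, 3.0]):
    assert dot(matvec(A, w), w) >= 0

# Maximality: solve (I + A)v = f via the explicit 2x2 inverse.
f = [1.0, 5.0]
IA = [[3.0, 1.0], [1.0, 3.0]]            # I + A
det = IA[0][0]*IA[1][1] - IA[0][1]*IA[1][0]           # = 8, nonzero
v = [( IA[1][1]*f[0] - IA[0][1]*f[1]) / det,
     (-IA[1][0]*f[0] + IA[0][0]*f[1]) / det]
residual = [matvec(IA, v)[i] - f[i] for i in range(2)]
print(max(abs(r) for r in residual))     # ~0: this f (and any f) is hit
```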
It was proven by Minty in the 60s that, for a monotone operator A, maximality is equivalent to R(I+A)=H.
MMOs are very nice: maximal monotonicity guarantees that D(A) is dense in H, that A is closed, that I+cA is bijective for all c>0, and more.
The last one allows us to define the resolvent of A:
J_c:=(I+cA)^-1, c>0
(7/)
We now introduce the Yosida approximation of A:
A_c:=1/c * (I-J_c)
If you formally treat these as "numbers", one has that J_c=1/(1+cA) and A_c=A/(1+cA). So you would kinda expect that as c->0, J_c->I and A_c->A and in fact they do!
(8/)
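The "numbers" heuristic can be checked directly in the simplest possible Hilbert space, H = R, where A acts as multiplication by a fixed a >= 0 (a toy model of mine, not from the thread):

```python
# Toy check in H = R: for A v = a*v with a >= 0, the resolvent is
# J_c = 1/(1 + c*a) and the Yosida approximation A_c = (1/c)(1 - J_c)
# simplifies to a/(1 + c*a), exactly the "numbers" heuristic.

a = 5.0          # a monotone "operator" on R: (a*v)*v = a*v^2 >= 0
v = 2.0

def J(c):
    return 1.0 / (1.0 + c * a)           # resolvent (I + cA)^{-1}

def A_c(c):
    return (1.0 - J(c)) / c              # Yosida approximation

for c in (1.0, 0.1, 0.01, 0.001):
    print(c, J(c) * v, A_c(c) * v)
# As c -> 0: J_c v -> v and A_c v -> a*v, as claimed.
# Note also A_c = a/(1+c*a) <= 1/c, so A_c is bounded for each fixed c.
```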
The KEY to this approximation theme of solving problems is that these notions give us a family of BOUNDED operators that approximates an unbounded operator.
And this is important because PDEs can be recast as ODEs in infinite dimensional spaces.
(9/)
Initial value problems can be written as an ODE in an infinite dim space H:
u'+Au=0 in (0,+\infty)
u(0)=u_0,
where u(t) is an element of H for each t.
Example: with A = -Laplacian (the sign that makes A monotone), we get the heat equation, a parabolic PDE.
Now, if A is LIPSCHITZ, we can solve this as follows:
(10/)
The solution is:
u(t)=u_0-\int_0^t A(u(s)) ds.
This motivates us to define:
G(v)(t):=u_0-\int_0^t A(v(s)) ds.
We then look for a FIXED POINT of this map. Through Lipschitz continuity, we can show that G is a contraction (in some space) for a short enough time interval.
(11/)
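A minimal numerical sketch of this fixed-point argument, under my own assumptions (scalar case A(u) = a*u, uniform time grid, left-endpoint quadrature): iterate G and watch the iterates settle on the exact solution u(t) = u_0*exp(-a*t).

```python
# Picard iteration for u' + a*u = 0, u(0) = u0, via the map
#   G(v)(t) = u0 - \int_0^t a*v(s) ds,
# discretized with the left-endpoint rule on a uniform grid.

import math

a, u0, T, N = 1.0, 1.0, 0.5, 1000        # a*T < 1 makes G a contraction
h = T / N
ts = [i * h for i in range(N + 1)]

def G(v):
    """One Picard iteration: G(v)(t_k) = u0 - h * sum of a*v over [0, t_k)."""
    out = [u0]
    integral = 0.0
    for k in range(N):
        integral += a * v[k] * h
        out.append(u0 - integral)
    return out

v = [u0] * (N + 1)                       # start from the constant guess
for _ in range(30):                      # contraction => iterates converge
    v = G(v)

exact = [u0 * math.exp(-a * t) for t in ts]
err = max(abs(x - y) for x, y in zip(v, exact))
print(err)   # small; what remains is just the O(h) quadrature error
```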
In PDEs, A is rarely Lipschitz! Take for example A = the Laplacian on (0,1).
Let u_n(x)=x^n. Then Au_n(x)=n(n-1)x^(n-2). For n>=2, its L^2(0,1) norm is n(n-1)/sqrt(2n-3), while ||u_n||_{L^2(0,1)}=1/sqrt(2n+1)<=1. The ratio blows up as n->\infty, i.e., A is unbounded!
(12/)
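A quick sanity check of the norm computation in the previous tweet, using the exact closed forms for the L^2(0,1) norms (valid for n >= 2):

```python
# For u_n(x) = x^n on (0,1):
#   ||u_n||_{L^2}^2   = int_0^1 x^{2n} dx     = 1/(2n+1)
#   ||u_n''||_{L^2}^2 = n^2(n-1)^2 * int_0^1 x^{2n-4} dx
#                     = n^2(n-1)^2/(2n-3)      (needs n >= 2)
# so the operator-norm ratio ||u_n''|| / ||u_n|| grows like n^2.

import math

def ratio(n):
    norm_u = math.sqrt(1.0 / (2*n + 1))
    norm_Au = n * (n - 1) / math.sqrt(2*n - 3)
    return norm_Au / norm_u

for n in (2, 5, 10, 100):
    print(n, ratio(n))   # grows without bound: the Laplacian is unbounded
```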
So what do we do? Here comes the Yosida approximation!
Instead of working with A, which is unbounded, we work with A_c, which is bounded (and Lipschitz).
Indeed, we consider:
u'+A_c(u)=0 in (0,\infty)
u(0)=u_0
By the previous arguments, this has a solution, u_c.
(13/)
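In the scalar toy model A u = a*u (my own illustration), the regularized problem can be solved in closed form, and one can watch u_c approach the solution of the original problem as c -> 0:

```python
# Regularized problem u' + A_c u = 0, u(0) = u0, in H = R with A u = a*u:
#   u_c(t) = u0 * exp(-a_c * t),  where a_c = a/(1 + c*a),
# versus the solution of the original problem, u(t) = u0 * exp(-a*t).

import math

a, u0, t = 2.0, 1.0, 1.0

def u_c(c):
    a_c = a / (1.0 + c * a)              # Yosida approximation of a
    return u0 * math.exp(-a_c * t)

exact = u0 * math.exp(-a * t)
for c in (1.0, 0.1, 0.01, 0.001):
    print(c, abs(u_c(c) - exact))        # error shrinks as c -> 0
```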
We then ask, is this sequence convergent? And if so, is the limit the solution we are looking for?
The answers are sort-of-yes to both questions.
Before we proceed, we emphasize what's special about this:
(14/)
Roughly, the solvability of an EVOLUTION problem depends on whether we can solve a STATIONARY problem:
Indeed, checking that A is maximal monotone is a STATIONARY problem: we are asking whether, for a given f in H, there is some v such that Av+v=f.
Isn't that just great?
(15/)
Going back: through estimates one can show that the WHOLE family u_c converges strongly in C([0,T]; H) for every T>0, no subsequences needed!
We can then show that u'_c also converges strongly, provided the initial datum has better regularity, i.e., u_0 is in D(A^2).
(16/)
So we suppose that it is, for the moment. Now the Yosida approximation has this nice property:
A_c(v)=A(J_c v).
Thus, we have that u_c's satisfy
u_c'+ A(J_c u_c)=0.
A being a MMO implies that (a) A is a closed operator (b) J_c v-> v. Combine these-
(17/)
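The identity A_c(v) = A(J_c v) is easy to verify in the scalar toy model, where J_c = 1/(1+c*a) and A_c = a/(1+c*a):

```python
# Scalar check of A_c = A o J_c: with A v = a*v,
#   A_c v = (1/c)(1 - J_c) v  should equal  a * (J_c v).

a = 3.0

def J(c):
    return 1.0 / (1.0 + c * a)           # resolvent

def A_c(c):
    return (1.0 - J(c)) / c              # definition (1/c)(I - J_c)

for c in (2.0, 0.5, 0.01):
    for v in (1.0, -4.0, 7.5):
        assert abs(A_c(c) * v - a * (J(c) * v)) < 1e-12
print("A_c v == A(J_c v) on all samples")
```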
-with the strong convergences of u_c's and u'_c's we get that the limit u satisfies
u'+Au=0
u(0)=u_0
Note: the initial condition is satisfied because u_c(0)=u_0 for every c.
Now, this is true provided u_0 is in D(A^2), i.e., Au_0 is in D(A). What does this mean?
(18/)
This means that u_0 needs HIGHER regularity: applying A "lowers" it. Example: the function
f(x)=x^2 for x>=0 and -x^2 for x<0
is differentiable at 0, but its derivative is 2|x|, which is not differentiable at zero.
Hence, we want u_0 smooth enough that Au_0 is still in D(A).
(19/)
So, what do we do? We go back to the resolvent operator!
Given u_0 in D(A), we take J_c u_0.
One can show that J_c u_0 is in D(A^2) and that it converges to u_0 as c->0.
Thus, for each c, we have some u_c solving
u'_c+Au_c=0
u_c(0)=J_c u_0.
(20/)
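One last scalar sketch (my toy model again, A u = a*u): replacing the initial datum u_0 by J_c u_0 changes the solution by at most |J_c u_0 - u_0| uniformly in time, and that goes to 0 as c -> 0:

```python
# With A u = a*u, the solution with regularized datum is
#   u_c(t) = (J_c u0) * exp(-a*t),
# so |u_c(t) - u(t)| = |J_c u0 - u0| * exp(-a*t) <= |J_c u0 - u0|,
# giving uniform-in-time convergence as c -> 0.

import math

a, u0, T = 2.0, 3.0, 1.0

def J(c):
    return 1.0 / (1.0 + c * a)           # resolvent

def sup_error(c, steps=100):
    # sup over a time grid of |u_c(t) - u(t)| on [0, T]
    return max(abs(J(c) * u0 - u0) * math.exp(-a * k * T / steps)
               for k in range(steps + 1))

for c in (1.0, 0.1, 0.01):
    print(c, sup_error(c))               # sup-norm error -> 0 as c -> 0
```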
Through estimates already obtained in previous steps, we show again strong convergence of u_c's and u'_c's to some u and u', respectively.
We can then take the limit in the equations in the previous tweet to obtain that u is our desired solution!
(21/)
Takeaways:
1. This is an ABSTRACT approach. We didn't need to know whether A is the Laplacian or some other operator. Just that it is MAXIMAL MONOTONE, which is a STATIONARY problem.
(22/)
2. This gives us better regularity from the start, since the convergences of the approximate solutions are strong (compared with the weak convergence one initially gets in the Galerkin method).
(23/)
The drawback: here, A is independent of time. There are problems where A would be time dependent, e.g., solving problems in noncylindrical domains by mapping them to a fixed domain.
Note that the Galerkin method does not need that A is time independent.
(24/)
Anyway, that's that! My reference for this is Brezis' book on functional analysis and Sobolev spaces.
Thank you for reading!
(25/25)