TRANSCRIPT
Renormalization Group by Conditional Expectations (Harmonic Extensions)
Hao Shen (Princeton University)
Harmonic Analysis and Renormalization Group Workshop
April 21, 2014
Gaussian measure
Measures on the space of functions (distributions) $\{\phi : \mathbb{R}^d \to \mathbb{R}\}$:
$$\exp\left(-\frac{1}{2}\int_{\mathbb{R}^d}(\nabla\phi(x))^2\,dx\right)\,\frac{D\phi}{\mathcal{N}}$$
Gaussian measure + interactions
Measures on the space of functions (distributions) $\{\phi : \mathbb{R}^d \to \mathbb{R}\}$:
$$\exp\left(-\frac{1}{2}\int_{\mathbb{R}^d}(\nabla\phi(x))^2\,dx + \underbrace{\lambda}_{\text{small}}\int_{\mathbb{R}^d} V(\nabla\phi(x))\,dx\right)\,\frac{D\phi}{\mathcal{N}}$$
e.g. $V(\nabla\phi(x)) = \sum_\mu \cos(\nabla_\mu\phi(x))$ or $V(\nabla\phi(x)) = \sum_\mu (\nabla_\mu\phi(x))^4$.
Finite dimensional approximation: let $\Lambda \subset \mathbb{Z}^d$ be a finite lattice,
$$\exp\left(-\frac{1}{2}\sum_{x\in\Lambda}(\nabla\phi(x))^2 + \lambda\sum_{x\in\Lambda}V(\nabla\phi(x))\right)\prod_{x\in\Lambda}\frac{d\phi(x)}{\mathcal{N}}$$
The idea of renormalization group (RG)
“Coarse-graining”: let $B$ be a block,
$$\phi(B) = \frac{1}{|B|}\sum_{x\in B}\phi(x)$$
Integrating out the small scale details $\zeta(x) = \phi(x) - \phi(B)$:
$$F(\phi)\,D\phi \longrightarrow F'(\phi)\,D\phi$$
Iterate the above operations:
$$F \to F' \to F'' \to \cdots$$
This is a transformation on the space of measures (models). The study of measures $\Longrightarrow$ a dynamical system.
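A minimal numerical sketch of one coarse-graining step (not from the talk; the 1D lattice, the block size, and the random stand-in field are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def coarse_grain(phi, b):
    """Split phi into block averages phi(B) and small-scale fluctuations zeta."""
    blocks = phi.reshape(-1, b)            # group sites into blocks B of size b
    phi_B = blocks.mean(axis=1)            # phi(B) = (1/|B|) sum_{x in B} phi(x)
    zeta = phi - np.repeat(phi_B, b)       # zeta(x) = phi(x) - phi(B)
    return phi_B, zeta

phi = rng.normal(size=64)                  # a stand-in field configuration
phi_B, zeta = coarse_grain(phi, b=4)

# Each block's fluctuation averages to zero: zeta carries only small-scale
# detail, and iterating phi -> phi_B is the transformation F -> F' -> F'' -> ...
assert np.allclose(zeta.reshape(-1, 4).mean(axis=1), 0.0)
```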
Heuristic statement of the result
At large scale,
$$\exp\left(-\frac{1}{2}\sum(\nabla\phi)^2 + \lambda\sum V(\nabla\phi)\right) \approx \exp\left(-\frac{1+O(\lambda)}{2}\sum(\nabla\phi)^2\right)$$
Precise statement of the result
Recall the model on $\Lambda = [-L^N/2,\, L^N/2]^d \cap \mathbb{Z}^d$:
$$\exp\left\{-\frac{1}{2}\sum_{x\in\Lambda}(\nabla\phi(x))^2 + \lambda\sum_{x\in\Lambda}V(\nabla\phi(x))\right\}\prod_{x\in\Lambda}\frac{d\phi(x)}{\mathcal{N}}$$
Squeeze $\Lambda$ into a fixed (continuum) torus:
$$\Lambda \hookrightarrow \mathbb{T}^d$$
Fix a mean zero smooth function $f$ on $\mathbb{T}^d$.
Precise statement of the result
Moment generating functional (Laplace transform):
$$Z(f) = \Big\langle e^{\sum_x f(x)\phi(x)}\Big\rangle = \frac{\int e^{-\frac{1}{2}\sum_\Lambda(\nabla\phi)^2 + \lambda\sum_\Lambda V(\nabla\phi) + \sum_{x\in\Lambda} f(x)\phi(x)}\,\prod_{x\in\Lambda}d\phi(x)}{\int e^{-\frac{1}{2}\sum_\Lambda(\nabla\phi)^2 + \lambda\sum_\Lambda V(\nabla\phi)}\,\prod_{x\in\Lambda}d\phi(x)}$$
Here,
$$f(\underbrace{x}_{\text{point in }\Lambda}) := L^{-(d+2)N/2}\, f(\underbrace{L^{-N}x}_{\text{point in }\mathbb{T}^d})$$
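A consistency check of the exponent $(d+2)N/2$ (our addition, assuming the standard massless free-field scaling $\phi(x)\approx L^{-(d-2)N/2}\,\Phi(L^{-N}x)$ with $\Phi$ a field on $\mathbb{T}^d$):

```latex
\sum_{x\in\Lambda} f(x)\phi(x)
  \approx \sum_{x\in\Lambda} L^{-(d+2)N/2} f(L^{-N}x)\; L^{-(d-2)N/2}\,\Phi(L^{-N}x)
  = L^{-dN}\sum_{x\in\Lambda} f(L^{-N}x)\,\Phi(L^{-N}x)
  \;\longrightarrow\; \int_{\mathbb{T}^d} f(u)\,\Phi(u)\,du ,
% since L^{-dN} is exactly the Riemann-sum volume element for the torus.
```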
Precise statement of the result: scaling limit
Theorem
Assume that $V(\nabla\phi) = \cos(\nabla\phi)$, $f$ is sufficiently smooth, and $|\lambda|$ is sufficiently small. Then there exists a constant $\beta$ depending on $\lambda$ such that
$$\lim_{N\nearrow\infty} Z(f) = \exp\left(\frac{1}{2}\int_{\mathbb{T}^d} f(x)\,(-\beta\Delta)^{-1}f(x)\,d^dx\right)$$
RG: if $F(\phi) = e^{-\frac{1}{2}(\nabla\phi)^2 + \lambda V(\nabla\phi)}$, then
$$F \to F' \to F'' \to \cdots \to F^*$$
where $F^*$ is Gaussian.
Rigorous RG methods
The above idea of coarse-graining by “block averaging” was rigorously carried out by Balaban, Gawedzki, Kupiainen (1980's).
Decomposition of Gaussian: Abdessalem, Bauerschmidt, Brydges, Dimock, Falco, Guadagni, Hurd, Slade, Yau etc. (1990's). Let $\mu_C$ be the Gaussian measure with covariance $C = (-\Delta)^{-1}$,
$$\int F(\phi)\,d\mu_C(\phi)$$
Decompose $C$ into positive definite pieces $C = \sum_j C_j$, so that
$$\phi = \sum_j \phi_j$$
Do the integrations step by step:
$$\int\cdots\int F(\phi)\,d\mu_{C_1}(\phi_1)\,d\mu_{C_2}(\phi_2)\cdots d\mu_{C_j}(\phi_j)\cdots$$
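A minimal sketch of this decomposition idea on a finite lattice (our illustration, not the cited authors' constructions; the Dirichlet boundary condition, which makes $-\Delta$ invertible, and the low/high-mode split are arbitrary choices):

```python
import numpy as np

n = 32
Delta = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # Dirichlet Laplacian
C = np.linalg.inv(-Delta)                                   # covariance C = (-Delta)^{-1}

# Split C into two positive definite pieces along spectral subspaces.
w, V = np.linalg.eigh(C)
half = n // 2
C1 = (V[:, :half] * w[:half]) @ V[:, :half].T
C2 = (V[:, half:] * w[half:]) @ V[:, half:].T
assert np.allclose(C1 + C2, C)                              # C = C_1 + C_2

# Independent phi_j ~ N(0, C_j) sum to a field with covariance C, so E[F(phi)]
# can be evaluated by integrating out one piece at a time.
rng = np.random.default_rng(1)
phi1 = V[:, :half] @ (np.sqrt(w[:half]) * rng.normal(size=half))
phi2 = V[:, half:] @ (np.sqrt(w[half:]) * rng.normal(size=n - half))
phi = phi1 + phi2
```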
The new method (conditional expectations)
Write $\int F(\phi)\,d\mu_C(\phi) = E[F(\phi)]$. Let $\Omega \subset \Lambda$,
$$E\big[F(\phi)\big] = E\Big[\,E\big[F(\phi)\,\big|\,\phi(\Omega^c)\big]\,\Big]$$
where the conditional expectation is conditioned on all $\{\phi(x)\,|\,x\notin\Omega\}$ being fixed. (Integrate out the field in $\Omega$, keeping the field outside $\Omega$ fixed.) Thinking of the Gaussian measure as $e^{\frac{1}{2}(\phi,\Delta\phi)}$,
$$-\sum_\Lambda \phi\,\Delta\phi = Q(\phi(\Omega)) + Q(\phi(\Omega^c)) + \text{crossing term} = \text{“}\,x^2 + y^2 + xy\,\text{”}$$
With “y” fixed, shift “x” to the minimizer of the quadratic form to cancel the crossing term.
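The mechanism is just completing the square; a toy computation (the shifted variable $\tilde{x}$ is our notation):

```latex
% With y fixed, the minimizer of x^2 + xy is x_*(y) = -y/2; substituting
% x = \tilde{x} - y/2 removes the crossing term:
x^2 + y^2 + xy
  = \Big(\tilde{x}-\tfrac{y}{2}\Big)^2 + y^2 + \Big(\tilde{x}-\tfrac{y}{2}\Big)y
  = \tilde{x}^2 + \tfrac{3}{4}\,y^2 .
% The quadratic form decouples into a pure fluctuation part (\tilde{x}) and a
% pure boundary part (y) -- the same cancellation the harmonic extension achieves.
```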
The new method
The minimizer is $P_\Omega\phi = \sum_{y\in\partial\Omega} P_\Omega(\cdot,y)\,\phi(y)$, i.e. the harmonic extension of $\phi$ from $\Omega^c$ into $\Omega$:
$$\phi = P_\Omega\phi + \zeta$$
where $P_\Omega$ is the Poisson kernel for the domain $\Omega$. Then,
$$-\sum_{x\in\Lambda}\phi(x)\,\Delta\phi(x) = -\sum_{x\in\Omega}\zeta(x)\,\Delta^D_\Omega\zeta(x) - \sum_{x\in\Lambda} P_\Omega\phi(x)\,\Delta P_\Omega\phi(x)$$
where $\Delta^D_\Omega$ is the Laplacian on $\Omega$ with Dirichlet b.c.
The conditional expectation = integrating out $\zeta$:
$$E\big[F(\phi)\,\big|\,\phi(\Omega^c)\big] = E_\zeta\big[F(P_\Omega\phi + \zeta)\big]$$
The covariance of $\zeta$ is the Dirichlet Green's function of $-\Delta$ on $\Omega$.
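A minimal numerical sketch of $\phi = P_\Omega\phi + \zeta$ and the exact decoupling of the quadratic form (the 1D lattice with overall Dirichlet boundary and the window $\Omega$ are illustrative assumptions):

```python
import numpy as np

n = 20
Delta = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)  # Dirichlet Laplacian
rng = np.random.default_rng(2)
phi = rng.normal(size=n)

inside = np.zeros(n, dtype=bool)
inside[6:14] = True                                        # Omega = {6, ..., 13}
out = ~inside

# Harmonic extension h = P_Omega phi: (Delta h)|_Omega = 0 with h = phi on Omega^c.
h = phi.copy()
h[inside] = np.linalg.solve(Delta[np.ix_(inside, inside)],
                            -Delta[np.ix_(inside, out)] @ phi[out])

zeta = phi - h                                             # fluctuation field
assert np.allclose(zeta[out], 0.0)                         # zeta supported in Omega
assert np.allclose((Delta @ h)[inside], 0.0)               # h is harmonic in Omega

# The crossing term vanishes exactly: zeta lives in Omega while -Delta h = 0 on
# Omega, so the Dirichlet form splits as in the displayed identity above.
assert np.allclose(phi @ (-Delta) @ phi,
                   zeta @ (-Delta) @ zeta + h @ (-Delta) @ h)
```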
The new method
Note that the functional
$$E_\zeta\big[F(P_\Omega\phi + \zeta)\big]$$
depends on $\phi$ via $P_\Omega\phi$. Namely, after the integration, the functional depends on the field in a smoother way (the smoothing effect of the Poisson kernel). Intuitively,
$$\text{“}\,P_\Omega \approx \text{the block averaging operator}\,\text{”}$$
The new method
The new method has the following advantages:
- The fluctuation field $\zeta$ automatically has finite range support. (In the block-average method, the fluctuation field only has exponentially decaying correlations; in Brydges' method, constructing the decomposition with finite range properties is a nontrivial task.)
- RG $\Longrightarrow$ local elliptic PDE problems. (In non-translation-invariant models, the difficulty would be merely estimating solutions to elliptic PDEs with non-constant coefficients.)
- As we will see, the norm on the functionals will be simplified.
The a priori tuning of the Gaussian
Based on the above heuristics, we split the quadratic part:
$$-\frac{1}{2}\sum_{x\in\Lambda}(\nabla\phi(x))^2 = -\frac{\beta}{2}\sum_{x\in\Lambda}(\nabla\phi(x))^2 + \frac{\beta-1}{2}\sum_{x\in\Lambda}(\nabla\phi(x))^2$$
$$\xrightarrow{\ \phi\,\to\,\phi/\sqrt{\beta}\ }\ -\frac{1}{2}\sum_{x\in\Lambda}(\nabla\phi(x))^2 + \frac{\sigma}{2}\sum_{x\in\Lambda}(\nabla\phi(x))^2$$
where $\beta \approx 1$ (it will be determined in the end) and $\sigma := 1 - \beta^{-1}$. The measure becomes
$$\exp\left\{\frac{\sigma}{2}\sum_{x\in\Lambda}(\nabla\phi(x))^2 + \lambda\sum_{x\in\Lambda} W(\nabla\phi(x))\right\}\,d\mu(\phi)$$
$$d\mu(\phi) = e^{-\frac{1}{2}\sum_{x\in\Lambda}(\nabla\phi(x))^2}\prod_{x\in\Lambda}\frac{d\phi(x)}{\mathcal{N}}$$
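A quick check of the rescaling arithmetic (our verification, not a slide):

```latex
% Substituting \phi \to \phi/\sqrt{\beta} into the split quadratic form:
-\frac{\beta}{2}\Big(\nabla\tfrac{\phi}{\sqrt{\beta}}\Big)^2
  + \frac{\beta-1}{2}\Big(\nabla\tfrac{\phi}{\sqrt{\beta}}\Big)^2
  = -\frac{1}{2}(\nabla\phi)^2 + \frac{\beta-1}{2\beta}(\nabla\phi)^2
  = -\frac{1}{2}(\nabla\phi)^2 + \frac{\sigma}{2}(\nabla\phi)^2 ,
% consistent with \sigma := 1 - \beta^{-1} = (\beta - 1)/\beta.
```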
An expansion over polymers
Take $f = 0$ for simplicity, to focus on the main ideas of our RG method. Let $E$ be the Gaussian expectation.
$$Z = E\left[\prod_{x\in\Lambda} e^{\frac{\sigma}{2}(\nabla\phi(x))^2}\Big(\underbrace{\big(e^{\lambda W(\nabla\phi(x))} - 1\big)}_{\text{“}O(\lambda)\text{”}} + 1\Big)\right] = E\left[\sum_{X\subseteq\Lambda} I(\Lambda\setminus X)\,K(X)\right]$$
where for any sets $X, Y$,
$$I(Y) = \prod_{x\in Y} e^{\frac{\sigma}{2}(\nabla\phi(x))^2}, \qquad K(X) = \prod_{x\in X} e^{\frac{\sigma}{2}(\nabla\phi(x))^2}\big(e^{\lambda W(\nabla\phi(x))} - 1\big)$$
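The step from the product to the sum over subsets $X$ is the identity $\prod_{x\in\Lambda}(k(x)+1) = \sum_{X\subseteq\Lambda}\prod_{x\in X}k(x)$; a tiny numerical check (the lattice size and the stand-in values of $k$ are arbitrary):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
Lam = range(5)                          # a tiny "lattice"
k = rng.normal(size=5)                  # stands in for e^{lambda W(grad phi(x))} - 1

lhs = np.prod(k + 1.0)                  # the product over all sites
rhs = sum(np.prod([k[x] for x in X]) if X else 1.0
          for r in range(6) for X in combinations(Lam, r))
assert np.isclose(lhs, rhs)             # equals the sum over subsets X of Lambda
```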
Overview of the rest of the talk
$$E\left[\sum_{X\subseteq\Lambda} I(\Lambda\setminus X,\phi)\,K(X,\phi)\right], \qquad I(Y,\phi) = \prod_{x\in Y} e^{\frac{\sigma}{2}(\nabla\phi(x))^2}$$
$$|K(X,\phi)| \le A^{-|X|}\,e^{\sum_X(\nabla\phi)^2}, \qquad A \gg 1$$
Two questions:
- How to keep this algebraic structure at all the scales (sites $\to$ blocks, $\phi$ $\to$ smoother fields)?
- How to maintain similar analytical bounds at all the scales?
Propagate algebraic structure to next scale
At scale $j$, one has
$$Z = E\left[\sum_{X\in\mathcal{P}_j} I_j(\Lambda\setminus X,\phi)\,K_j(X,\phi)\right]$$
[Figure: $I_j$ lives on the grey blocks, and $K_j$ lives on the black regions.]
Propagate algebraic structure to next scale
We would like to find a way to rewrite the expression at scale $j+1$:
$$Z = E\left[\sum_{U\in\mathcal{P}_{j+1}} I_{j+1}(\Lambda\setminus U)\,K_{j+1}(U)\right]$$
while $I_{j+1}, K_{j+1}$ still have the above forms and factorization properties. The idea is to postulate that $I_{j+1}$ will be the exponential of a new quadratic form, and then find $K_{j+1}$.
Propagate algebraic structure to next scale
$$E\left[\sum_{X\in\mathcal{P}_j} I_j(\Lambda\setminus X)\,K_j(X)\right] = E\left[\sum_{X\in\mathcal{P}_j}\prod_{B\in\Lambda\setminus X}\big(I_{j+1}(B) + \delta I_j(B)\big)\,K_j(X)\right]$$
$$= E\left[\sum_{X\in\mathcal{P}_j}\sum_{Y\subseteq\Lambda\setminus X} I_{j+1}\big(\Lambda\setminus(X\cup Y)\big)\,\delta I_j(Y)\,K_j(X)\right]$$
Consider the smallest scale-$(j{+}1)$ polymer that contains $X\cup Y$, written as $U := \overline{X\cup Y}\in\mathcal{P}_{j+1}$, and then resum:
$$E\left[\sum_{U\in\mathcal{P}_{j+1}} I_{j+1}(\Lambda\setminus U)\ \underbrace{\sum_{\overline{X\cup Y}=U}\delta I_j(Y)\,K_j(X)}_{K^\#_j(U)}\right]$$
Conditional expectation
$$E\left[\sum_{U\in\mathcal{P}_{j+1}} I_{j+1}(\Lambda\setminus U)\prod_{V\in\mathrm{c.c.}(U)} K^\#_j(V)\right] = E\left[\sum_{U\in\mathcal{P}_{j+1}} I_{j+1}(\Lambda\setminus U)\times\prod_{V\in\mathrm{c.c.}(U)} E\big[K^\#_j(V)\,\big|\,\phi\big((V^+)^c\big)\big]\right]$$
$$= E\left[\sum_{U\in\mathcal{P}_{j+1}} I_{j+1}(\Lambda\setminus U)\,K_{j+1}(U)\right]$$
Conditional expectation
Recall that “the conditional expectation = integrating out $\zeta$”:
$$E\big[K(\phi)\,\big|\,\phi(V^c)\big] = E_\zeta\big[K(P_V\phi + \zeta)\big]$$
where the covariance of $\zeta$ is the Dirichlet Green's function for $V$. After the integration, the resulting functional depends on $\phi$ via $P_V\phi$, which is expected to be “smoother” than $\phi$; however:
- The Poisson kernel $P_U(x,y)$, for $x\in U$, $y\in\partial U$, only has a smoothing effect for $x$ sufficiently far from $\partial U$!
- If one directly took the conditional expectation fixing the field outside the “red line” (in the figure), $I_{j+1}$ would also get involved in the integration...
Conditional expectation (making a corridor)
The idea to settle the above dilemma is still “expand and resum”:
$$\sum_{U\in\mathcal{P}_{j+1}}\prod_{B\in\mathcal{B}_{j+1}(\Lambda\setminus U)}\Big(\big(I_{j+1}(B)-1\big)+1\Big)\,K'_{j+1}(U) = \sum_{U\in\mathcal{P}_{j+1}}\sum_{D\subseteq\Lambda\setminus U}\prod_{B\in\mathcal{B}_{j+1}(D)}\big(I_{j+1}(B)-1\big)\,K'_{j+1}(U)$$
Then we “glue” the factors $I_{j+1}(B)-1$ which touch $U$ onto $K'_{j+1}$, until the set $U\cup(\cup B)$ is surrounded by a “corridor” filled with only “1”s. After that, we resum all the remaining $I_{j+1}(B)-1$ outside the corridor, and we obtain
$$Z = E\left[\sum_{\tilde U\in\mathcal{P}_{j+1}} I_{j+1}(\Lambda\setminus\tilde U)\,K_{j+1}(\tilde U)\right]$$
where $\tilde U$ is a set slightly larger than $U$.
Propagate analytical bound to next scale
We want:
$$K_j(X) \lesssim A_j^{-|X|}\ \ \forall X\in\mathcal{P}_j \qquad\Longrightarrow\qquad K_{j+1}(U) \lesssim A_{j+1}^{-|U|}\ \ \forall U\in\mathcal{P}_{j+1}$$
However,
$$K_{j+1}(U) \approx E\Big[\sum_{\overline{B}=U}\delta I_j(B) + \sum_{\overline{X}=U} K_j(X)\,\Big|\,\cdots\Big]$$
So many finer scale polymers $X, Y$ are contained in a fixed coarse scale polymer $U$! In fact, only $O(L^d)$ terms...
The important scaling
How to deal with this $O(L^d)$ growth of $K$? Let's look at the covariance of $\nabla P_X\phi$.
Lemma: Let $x\in X$, and let $C$ be the covariance of the Gaussian field $\phi$. If $d(x,\partial X) \ge \frac{1}{3}L^j$, then
$$\sum_{y_1,y_2\in\partial X}(\nabla_x P_X)(x,y_1)\,C(y_1,y_2)\,(\nabla_x P_X)(x,y_2) \le O(1)\,L^{-dj}$$
This means that $\nabla P_X\phi$ “scales down by $O(L^{-d/2})$”. Writing
$$\text{“}\,K = c_0 + c_2(\nabla P_X\phi)^2 + O\big((\nabla P_X\phi)^4\big)\,\text{”},$$
as long as we choose
$$I_{j+1} = e^{E_{j+1} + \sigma_{j+1}(\nabla P_U\phi)^2}$$
in such a way that the $c_0 + c_2(\nabla P_X\phi)^2$ part is absorbed into $I_{j+1}$, then the $O\big((\nabla P_X\phi)^4\big)$ part will scale down by $O(L^{-2d})$.
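A heuristic power count (our paraphrase of why this absorption beats the entropy):

```latex
\underbrace{O(L^{d})}_{\text{scale-}j\text{ terms inside a fixed }U}
  \;\times\;
\underbrace{O(L^{-2d})}_{\text{scale-down of the }(\nabla P_X\phi)^4\text{ remainder}}
  \;=\; O(L^{-d}) \;\ll\; 1 ,
% so once c_0 + c_2 (\nabla P_X \phi)^2 is absorbed into I_{j+1}, the remainder
% contracts from scale to scale instead of growing.
```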
The important scaling
Lemma: Let $x\in X$, and let $C$ be the covariance of the Gaussian field $\phi$. If $d(x,\partial X) \ge \frac{1}{3}L^j$, then
$$\sum_{y_1,y_2\in\partial X}(\nabla_x P_X)(x,y_1)\,C(y_1,y_2)\,(\nabla_x P_X)(x,y_2) \le O(1)\,L^{-dj}$$
Idea of proof:
$$\mathrm{LHS} = \sum_{y_2\in\partial X}\ \underbrace{\nabla_x C(x,y_2)}_{\lesssim\,|x-y_2|^{1-d}\,=\,O(L^{-(d-1)j})}\ \overbrace{\nabla_x}^{L^{-j}}\underbrace{P_X(x,y_2)}_{O(L^{-(d-1)j})}$$
Summing over the $O(L^{(d-1)j})$ boundary points $y_2$ then gives the claimed $O(L^{-dj})$.
Propagate analytical bound to next scale
Again, we want:
$$K_j(X,\phi) \lesssim e^{\sum_X(\nabla\phi)^2}\ \ \forall\phi \qquad\Longrightarrow\qquad K_{j+1}(U,\phi') \lesssim e^{\sum_X(\nabla\phi')^2}\ \ \forall\phi'$$
In the field decomposition method,
$$E_\zeta\, e^{\sum_X(\nabla\phi)^2} = E_\zeta\, e^{\sum_X(\nabla\phi'+\nabla\zeta)^2} \le E_\zeta\, e^{2\sum_X\big((\nabla\phi')^2 + (\nabla\zeta)^2\big)}$$
Bounds deteriorate (lose control)... One has to put in many extra terms.
The norm for K
The weight in “the method of decomposition of the Gaussian field” was roughly
$$G(X) \sim e^{\sum_X(\nabla\phi)^2}$$
but the complete form is complicated:
$$G(X) = \exp\left(c_1\sum_X(\nabla\phi_{j+1})^2 + c_2\sum_{B\in\mathcal{B}_j(X)}\sup_{B^*}\big|L^{2j}\nabla^2\phi_{j+1}\big|^2 + c_3\sum_{\partial X}(\nabla\phi_{j+1})^2 + c_4\max_{0\le p\le 2}\sup_{B^*}\big|L^{pj}\nabla^p\zeta\big|^2\right)$$
where $\phi_{j+1} = \phi_j + \zeta$.
Propagate analytical bound to next scale
We bound $K_j(X,\phi) \lesssim G(X,X^+)$, where
$$G(X,Y) := E\Big[e^{2\sum_X(\nabla\phi)^2}\,\Big|\,\phi(Y^c)\Big]\Big/ N(X,Y)$$
for $X\subset Y$, where
$$N(X,Y) := E\Big[e^{2\sum_X(\nabla\phi)^2}\,\Big|\,\phi(Y^c)=0\Big]$$
An interesting property
$$\exp\left(2\sum_X(\partial\psi_2)^2\right) \le G(X,Y) \le \exp\left(2\sum_X(\partial\psi_1)^2\right)$$
where $\psi_1$ minimizes $\sum_Y(\partial\phi)^2 - \sum_X(\partial\phi)^2$ with $\phi|_{Y^c}$ fixed, and $\psi_2$ minimizes $\sum_Y(\partial\phi)^2$ with $\phi|_{Y^c}$ fixed.
Propagate analytical bound to next scale
The conditional expectation automatically takes care of the integration of $G$:
$$E\big[G(X,Y)\,\big|\,\phi(U^c)\big] = E\Big[E\Big[e^{2\sum_X(\nabla\phi)^2}\,\Big|\,\phi(Y^c)\Big]\,\Big|\,\phi(U^c)\Big]\Big/ N(X,Y)$$
$$= E\Big[e^{2\sum_X(\nabla\phi)^2}\,\Big|\,\phi(U^c)\Big]\Big/ N(X,Y) = G(X,U)\ \underbrace{\frac{N(X,U)}{N(X,Y)}}_{\le\ c^{L^{-dj}|X|}}$$
Linearization of RG map
The map $(\sigma_j,\sigma_{j+1},K_j) \to K_{j+1}$ is smooth (w.r.t. the norms we defined). The linearized part of $K_{j+1}(U)$ at $(0,0,0)$ contains three parts:
- The contributions from $K_j(X)$ where $X\subseteq U$ is “large”, which can be shown to be negligible.
- For small $X$, the Taylor remainder after the second order Taylor expansion of $E\big[K_j(X)\,\big|\,\phi((U^+)^c)\big]$ is negligible.
- One can choose $\sigma_{j+1}$ so that the leading Taylor terms of $E\big[K_j(X)\,\big|\,\phi((U^+)^c)\big]$ are absorbed into $I_{j+1}$.
Determine β
The dynamical system
$$\sigma_{j+1} = \sigma_j + \alpha(K_j),\qquad K_{j+1} = \mathcal{L}K_j + f(\sigma_j,K_j)\qquad (1)$$
satisfies $\|\mathcal{L}\| < 1$. By the standard stable manifold theorem, there exists an initial tuning $\beta$ so that $|\sigma_j|\to 0$ and $\|K_j\|_j\to 0$. In other words,
$$E\Big[e^{\frac{\sigma}{2}\sum_{x\in\Lambda}(\nabla\phi(x))^2 + \lambda\sum W(\nabla\phi)}\Big] \to \text{“}\,e^{\mathrm{const}}\,E[1]\,\text{”}\qquad (j\to\infty)$$
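A toy caricature of the tuning in a system like (1) (the maps below are invented for illustration; the talk only asserts that the stable manifold theorem applies):

```python
import numpy as np

def flow(sigma0, K0=0.1, steps=60):
    """Iterate a toy version of (1): sigma drifts by K, K contracts (norm 1/2)."""
    sigma, K = sigma0, K0
    for _ in range(steps):
        sigma, K = sigma + K, 0.5 * K + sigma * K  # sigma_{j+1}, K_{j+1}
        if abs(sigma) > 2.0:                       # left the perturbative regime
            break
    return sigma, K

# Stable-manifold tuning: bisect over the initial value (the analogue of
# choosing beta) so that the trajectory converges to (0, 0).
lo, hi = -1.0, 0.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if flow(mid)[0] < 0 else (lo, mid)

sigma_end, K_end = flow(0.5 * (lo + hi))
print(f"tuned sigma_0 = {0.5 * (lo + hi):.6f} -> "
      f"final (sigma, K) = ({sigma_end:.2e}, {K_end:.2e})")
```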
Thank you!