Tehnici de Optimizare - Laborator TO (Optimization Techniques Lab)
Ion Necoara
2019
Software packages available for optimization

- Free software: CVX (solves only convex optimization problems), www.stanford.edu/~boyd/cvx
- Commercial software:
  - Matlab (contains an optimization toolbox). The most important optimization functions in Matlab are: linprog, quadprog, fminunc, fmincon (use help linprog for details)
  - CPLEX and, more recently, Gurobi
  - etc.
Optimization problems - NLP

Recall the nonlinear optimization problem (NLP):

(NLP): min_{x ∈ R^n} f(x)
s.t.: g(x) ≤ 0, h(x) = 0,

where the vector-valued functions are g = [g_1 · · · g_m] and h = [h_1 · · · h_p].

Example:

min_{x ∈ R^3} x_1^6 + x_2^4 + x_3
s.t.: 5x_1 + 6x_2 ≤ 0, x_1 + x_3 = 4, x_1 + x_2 = −2

In this case we have:
f: R^3 → R, f(x) = x_1^6 + x_2^4 + x_3
g: R^3 → R, g(x) = 5x_1 + 6x_2
h: R^3 → R^2, h(x) = [x_1 + x_3 − 4; x_1 + x_2 + 2]
NLP - Matlab implementation

For the one-dimensional NLP problem:

min_{x ∈ R} f(x)

one uses the Matlab function fminbnd, with the syntax:

x = fminbnd(objfun, xl, xu, optiuni)

where objfun is the objective function, supplied as a variable of type function handle; xl, xu ∈ R delimit the search interval for the minimum; optiuni is the set of options specific to each MATLAB routine.

Example: consider the one-dimensional optimization problem:

min_{x ∈ R} f(x) (= e^x (4x^2 + 2x))

Run the following code in Matlab:

objfun = @(x) exp(x)*(4*x^2 + 2*x);
xl = -1; xu = 1;
optiuni = optimset('Display', 'off');
[xoptim, foptim] = fminbnd(objfun, xl, xu, optiuni)
NLP - Matlab implementation

For the general unconstrained NLP problem:

min_{x ∈ R^n} f(x)

one uses the Matlab function fminunc, with the syntax:

x = fminunc(objfun, x0, optiuni)

where objfun is the objective function, supplied as a variable of type function handle; x0 ∈ R^n is the initial point from which the search for the minimum starts; optiuni is the set of options specific to each MATLAB routine.

Example: consider the unconstrained optimization problem

min_{x ∈ R^2} f(x) (= e^{x_1} (4x_1^2 + 2x_2^2 + 4x_1 x_2))

Run the Matlab code:

objfun = @(x) exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2));
x0 = [-1 1]';
optiuni = optimset('fminunc');
[xoptim, foptim] = fminunc(objfun, x0, optiuni)
Gradient method

For the unconstrained NLP problem:

min_{x ∈ R^n} f(x)

we can use the gradient method, with the iteration:

x_{k+1} = x_k − α_k ∇f(x_k)

where
- ∇f(x_k) is the gradient of f at the point x_k
- α_k is the step size, which can be chosen, for example, constant α_k = 1/L (L the Lipschitz constant of the gradient) or ideal (exact line search):

α_k = arg min_{0 ≤ α ≤ 1} F(α) (= f(x_k − α ∇f(x_k)))
Matlab program - gradient method with ideal step size

Algorithm MG. (Given the starting point x0 and the accuracy ε, compute an ε-optimal solution of the problem min_x f(x) (= 10x_1^6 + 30x_2^6 + x_1^2 + 50x_2^2) with the ideal-step gradient method.)

function MG_ideal(x0, eps)
obj = @(x) 10*x(1)^6 + 30*x(2)^6 + x(1)^2 + 50*x(2)^2;
grad = @(x) [60*x(1)^5 + 2*x(1); 180*x(2)^5 + 100*x(2)];
x = x0; tg = x0;
while norm(grad(x)) > eps
    obj_alpha = @(alpha) obj(x - alpha*grad(x));   % line-search objective
    alpha_star = fminbnd(obj_alpha, 0, 1);         % ideal step size
    x = x - alpha_star*grad(x); tg = [tg x];       % iterate and store trajectory
end
x1 = -0.2:0.1:0.2; y1 = -0.2:0.1:0.2; [X, Y] = meshgrid(x1, y1);
Z = 10*X.^6 + 30*Y.^6 + X.^2 + 50*Y.^2;
figure; plot(tg(1,:), tg(2,:)); hold on; contour(X, Y, Z);
Matlab program - gradient method with ideal step size

(Figure: contour plot of f with the trajectory of the ideal-step gradient method.)
Newton's method

For the unconstrained NLP problem:

min_{x ∈ R^n} f(x)

we can use Newton's method, with the iteration:

x_{k+1} = x_k − α_k (∇²f(x_k))^{−1} ∇f(x_k)

where
- ∇f(x_k) / ∇²f(x_k) is the gradient/Hessian of f at the point x_k
- α_k is the step size, which can be chosen, for example, constant α_k = 1 or ideal:

α_k = arg min_{α > 0} F(α) (= f(x_k − α (∇²f(x_k))^{−1} ∇f(x_k)))
Newton's method versus the gradient method

min_{x ∈ R^2} f(x) (= 10x_1^6 + 30x_2^6 + x_1^2 + 50x_2^2)

function gradient_Newton_ideal(x0, eps)
obj = @(x) 10*x(1)^6 + 30*x(2)^6 + x(1)^2 + 50*x(2)^2;
gradient = @(x) [60*x(1)^5 + 2*x(1); 180*x(2)^5 + 100*x(2)];
hessiana = @(x) [300*x(1)^4 + 2, 0; 0, 900*x(2)^4 + 100];
%% Gradient method with ideal step size
x = x0; trajectory_g = x0;
while norm(gradient(x)) > eps
    grad = gradient(x);
    obj_alpha = @(alpha) obj(x - alpha*grad);
    alpha_star = fminbnd(obj_alpha, 0, 1);
    x = x - alpha_star*grad;
    trajectory_g = [trajectory_g x];
end
Newton's method versus the gradient method

%% Newton's method with ideal step size
x = x0;
trajectory_n = x0;
while norm(gradient(x)) > eps
    grad = gradient(x);
    hess = hessiana(x);
    d_newton = inv(hess)*grad;   % Newton direction
    obj_alpha = @(alpha) obj(x - alpha*d_newton);
    alpha_star = fminbnd(obj_alpha, 0, 1);
    x = x - alpha_star*d_newton;
    trajectory_n = [trajectory_n x];
end
Newton's method versus the gradient method

x1 = -0.2:0.1:0.2; y1 = -0.2:0.1:0.2; [X, Y] = meshgrid(x1, y1);
Z = 10*X.^6 + 30*Y.^6 + X.^2 + 50*Y.^2;
figure
plot(trajectory_g(1,:), trajectory_g(2,:), 'r+-', 'LineWidth', 3); hold on
plot(trajectory_n(1,:), trajectory_n(2,:), 'k*--', 'LineWidth', 3);
legend('Gradient Method', 'Newton Method');
hold on
contour(X, Y, Z, 'ShowText', 'on', 'LineWidth', 2);
end
Newton's method versus the gradient method

Comparison between Newton's method and the gradient method, both with ideal step size, for f: R^2 → R, f(x) = 10x_1^6 + 30x_2^6 + x_1^2 + 50x_2^2.

(Figures: successive contour-plot frames showing the iterates - Newton with ideal step size at k = 1, 2, then the gradient method with ideal step size at k = 1, 2, ..., 10.)
Gauss-Newton methods for least squares (CMMP)

Systems of nonlinear equations F(x) = 0, where F: R^n → R^m, can be formulated as a nonlinear least-squares (CMMP) problem:

min_{x ∈ R^n} f(x) (= 1/2 ‖F(x)‖²),

for which the most frequently used methods are:

- the Gauss-Newton method

x_{k+1} = arg min_{x ∈ R^n} 1/2 ‖F(x_k) + (∂F/∂x)(x_k)(x − x_k)‖²,

where the term inside the norm is the linearization of F(x) at x_k;

- the Levenberg-Marquardt method

x_{k+1} = arg min_{x ∈ R^n} 1/2 ‖F(x_k) + (∂F/∂x)(x_k)(x − x_k)‖² + (β_k/2) ‖x − x_k‖².

Compactly, we can write (with F_k = F(x_k), J_k = (∂F/∂x)(x_k)):

x_{k+1} = x_k − α_k (J_k^T J_k + β_k I_n)^{−1} J_k^T F_k
CMMP application - parameter identification

We have a spring with mass m_s, spring constant D, and damping constant γ. To determine the values of these 3 parameters we measure the oscillation period T 10 times, fixing one end and attaching objects of different masses to the other end. If the mass m is attached, the period is given by:

T = 2π / sqrt( D/(m + (1/3) m_s) − γ² )

The ten measurements are given in the table (m [kg], T [s]):

m | 1.0 | 2.0   | 3.0   | 4.0 | 5.0   | 6.0  | 7.0  | 8.0   | 9.0 | 10.0
T | 0.2 | 0.284 | 0.343 | 0.4 | 0.446 | 0.49 | 0.53 | 0.565 | 0.6 | 0.63

The nonlinear least-squares problem for finding x = (m_s, D, γ):

minimize_{x ∈ R^3} ‖F(x)‖₂²
CMMP application - parameter identification (cont.)

1. We write a Matlab function that computes the Jacobian J(x) = ∇F(x) of a function F using the "i-trick" (complex-step differentiation: easy to implement in Matlab, good accuracy):

J(x) d = Re( (F(x + i t d) − F(x)) / (i t) )

2. We implement the Levenberg-Marquardt method:

x_{k+1} = x_k − (J(x_k)^T J(x_k) + β I_n)^{−1} J(x_k)^T F(x_k).

We use the tolerance TOL = 10^{−6} for the stopping criterion and the constant β = 10^{−5}. We initialize with x0 given by m_s = 0.5 [kg], D = 500 [kg/s²], and γ = 0.5 [1/s].
Parameter identification - Matlab

The code for the function F(x):

function F = my_function(x)
ms = x(1); D = x(2); gamma = x(3);
m = 1:10;
T = [0.205 0.284 0.343 0.401 0.446 0.491 0.530 0.565 0.603 0.633];
for i = 1:10
    h(i) = 2*pi/sqrt(D/(m(i) + ms/3) - gamma*gamma);   % model prediction
end
F = (h - T)';   % residual vector (as a column, so that J'*F is well defined)
end
Parameter identification - Matlab

The code for the Jacobian J(x) using the i-trick:

J(x) d = Re( (F(x + i t d) − F(x)) / (i t) )

function [F, J] = jacobian(f, x)
F = feval(f, x);   % evaluate f at x
N = length(x);     % the dimension of x
t = 1e-100i;       % the imaginary variation
d = eye(N);        % collection of unit vectors
for j = 1:N
    J(:, j) = real((feval(f, x + t*d(:, j)) - F)/t);
end
end
Parameter identification - Matlab

The code for the Levenberg-Marquardt method:

x_{k+1} = x_k − (J(x_k)^T J(x_k) + β I_n)^{−1} J(x_k)^T F(x_k).

function gauss_newton_method
x = [0.5; 500; 0.5]; dim = 3;
TOL = 1e-6; beta = 1e-5;
maxStep = 20;        % max. number of iterations
format long e;       % printing format
[F, J] = jacobian(@my_function, x);
iter = 1;
while iter <= maxStep
    x = x - inv(J'*J + beta*eye(dim))*(J'*F);
    norm_gradient(iter) = norm(J'*F);
    if norm_gradient(iter) < TOL
        break;
    end
    [F, J] = jacobian(@my_function, x);
    iter = iter + 1;
end
end
Parameter identification - results (plot)

nr iter: 7
ms = 1.952649076905174e-01
D = 1.014432359156959e+03
gamma = 1.634338770243429e+00
Fmincon in Matlab for NLP

Fmincon in Matlab implements an interior-point (Newton) method for NLP:

min_{x ∈ R^n} f(x)
s.t.: Ax ≤ b, Aeq x = beq
      c(x) ≤ 0, ceq(x) = 0, lb ≤ x ≤ ub

where c(x), ceq(x) are vector-valued functions representing the nonlinear constraints. The function fmincon has the syntax:

x = fmincon(fun, x0, A, b, Aeq, beq, lb, ub, nonlcon, optiuni)

where the input parameter nonlcon is, like fun, an object of type function handle.

Example: consider the function f(x) = e^{x_1}(4x_1^2 + 2x_2^2 + 4x_1 x_2 + 2x_2 + 1) and the constraints x_1 x_2 − x_1 − x_2 + 1.5 ≤ 0 and −x_1 x_2 − 10 ≤ 0.
Fmincon - Matlab implementation

- We write an M-file confun.m for the constraints:

function [c, ceq] = confun(x)
c = [1.5 + x(1)*x(2) - x(1) - x(2); -x(1)*x(2) - 10];
ceq = [];

- We write an M-file objfun.m for the objective function:

function f = objfun(x)
f = exp(x(1))*(4*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);

- We take the initial point x0 = (−1, 1).

- We call fmincon:

x0 = [-1, 1];
opt = optimset('fmincon');
opt.LargeScale = 'off';
[x, fval] = fmincon(@objfun, x0, [], [], [], [], [], [], @confun, opt)
Projected gradient method

(NLP): f* = min_{x ∈ X} f(x)

where the set X is convex and easy to project onto (a halfspace, hyperplane, ball):

[y]_{a^T x ≤ b} = y − (max(a^T y − b, 0) / ‖a‖²) a

Projected gradient method:

x_{k+1} = [x_k − α_k ∇f(x_k)]_X

The step size α_k is chosen constant 1/L or by the ideal (line-search) method!
Projected gradient - Matlab for X = {x : a^T x ≤ b}

function MGP_constant()
f = @(x) x(1)^4 + 2*x(1)^2*x(2) + 2*x(2)^2 - x(2) + 3;
gradient = @(x) [4*x(1)^3 + 4*x(1)*x(2); 2*x(1)^2 + 4*x(2) - 1];
x_min = [0; 0.25];
a = [1; 5]; b = 0.5;   %% halfspace definition (note: norm(a)^2 = 26)
proiectie = @(x) [x(1) - max(0, a'*x - b)/26; ...
                  x(2) - 5*(max(0, a'*x - b)/26)];
x = [-0.9; 1.05]; xvechi = [0; 0];
traiect = [x]; alpha = 0.07;   %% iteration step size
while norm(x - xvechi) > 0.00001   %% stopping criterion
    %% constant-step iteration
    xvechi = x;
    z = x - (alpha*gradient(x));
    x = proiectie(z);
    traiect = [traiect x];
end
Projected gradient - Matlab for X = {x : a^T x ≤ b}

%%% Contour initialization
x1 = linspace(-1 + x_min(1), 1 + x_min(1), 20);
y1 = linspace(-1 + x_min(2), 1 + x_min(2), 20);
[X, Y] = meshgrid(x1, y1);
fvec = X.^4 + 2*(X.^2).*Y + 2*(Y.^2) - Y + 3*ones(size(X));
V = [7 6 5 4 3 2 1.5 1 0.7];
halfx = (b - a(2)*y1)/a(1);
fig = figure;
hold on
contour(X, Y, fvec, V)
hold on
plot(x_min(1), x_min(2), '-x');   %% mark the global minimum
hold on
plot(halfx, y1, 'g-x');
hold on
plot(traiect(1,:), traiect(2,:), 'r-x', 'LineWidth', 3)
Projected gradient method - plot

(Figure: contour plot with the halfspace boundary and the projected-gradient trajectory.)
Barrier method for CP

min_{x ∈ R^n} f(x)  s.t.: g(x) ≤ 0, Ax = b,

where f, g are convex functions. We formulate the barrier problem:

min_{x ∈ R^n} f(x) − τ Σ_{i=1}^m log(−g_i(x))  s.t.: Ax = b.

Choose x0 strictly feasible, τ_0 > 0, σ < 1 and ε > 0.
While m τ_k ≥ ε repeat:
1. Compute x_{k+1} = x(τ_k) starting from the initial point x_k ("warm start");
2. Decrease the parameter: τ_{k+1} = σ τ_k.

⇒ To determine x_{k+1} we solve (e.g., by Newton's method) the log-barrier reformulation with parameter τ_k, starting from the initial point x_k;
⇒ After k iterations we have f(x_k) − f* ≤ m τ_0 σ^k.
Barrier method - Matlab implementation

function Metoda_Bariera()
f = @(x) exp(x(1)+3*x(2)-0.1) + exp(x(1)-3*x(2)-0.1) + exp(-x(1)-0.1);
gradient = @(x) [exp(x(1)+3*x(2)-0.1) + exp(x(1)-3*x(2)-0.1) - exp(-x(1)-0.1); ...
                 3*exp(x(1)+3*x(2)-0.1) - 3*exp(x(1)-3*x(2)-0.1)];
hessiana = @(x) [exp(x(1)+3*x(2)-0.1) + exp(x(1)-3*x(2)-0.1) + exp(-x(1)-0.1), ...
                 3*exp(x(1)+3*x(2)-0.1) - 3*exp(x(1)-3*x(2)-0.1); ...
                 3*exp(x(1)+3*x(2)-0.1) - 3*exp(x(1)-3*x(2)-0.1), ...
                 9*exp(x(1)+3*x(2)-0.1) + 9*exp(x(1)-3*x(2)-0.1)];
F = @(x, tau) f(x) - tau*log(x(1) - x(2));
grad_F = @(x, tau) gradient(x) - tau*[1/(x(1)-x(2)); -1/(x(1)-x(2))];
hess_F = @(x, tau) hessiana(x) - tau*[-1/((x(1)-x(2))^2), 1/((x(1)-x(2))^2); ...
                                       1/((x(1)-x(2))^2), -1/((x(1)-x(2))^2)];
a = [-1; 1]; b = 0;   %% halfspace definition
x = [-0.6; -0.9]; x_min = [0; 0.25]; traiect = [x];
tau = 1;   % initial barrier parameter (not given on the slide; 1 is an assumed choice)
Barrier method - Matlab implementation (cont.)

while tau > 0.0001   %% stopping criterion
    %% inner Newton method
    while norm(grad_F(x, tau)) > 0.0000001
        H = hess_F(x, tau);
        g = grad_F(x, tau);
        x = x - inv(H)*g;
    end
    tau = 0.6*tau;
    traiect = [traiect x];
end
Barrier method - results (plots)

%%% Contour initialization
x1 = linspace(-1.2 + x_min(1), 1.2 + x_min(1), 20);
y1 = linspace(-1.2 + x_min(2), 1.2 + x_min(2), 20);
[X, Y] = meshgrid(x1, y1);
fvec = exp(X + 3*Y - 0.1) + exp(X - 3*Y - 0.1) + exp(-X - 0.1);
V = [20 13 7 6 5 4 3 2 1]; halfx = (b - a(2)*y1)/a(1);
fig = figure;
hold on
contour(X, Y, fvec, V)
hold on
plot(x_min(1), x_min(2), '-x');   %% mark the global minimum
hold on
plot(halfx, y1, '-x');
%% constant-step iterates
hold on
plot(traiect(1,:), traiect(2,:), 'r-x', 'LineWidth', 3)
Barrier method - plot

(Figure: contour plot with the halfspace boundary and the barrier-method iterates.)
Optimization problems - LP

If f(x), g(x) and h(x) in the NLP problem are affine (f(x) = c^T x, g(x) = Cx − d, h(x) = Ax − b), then the NLP becomes a linear programming problem (LP):

(LP): min_{x ∈ R^n} c^T x
s.t.: Cx ≤ d, Ax = b.

- LPs can be solved efficiently, with theory developed starting in the 1940s (the simplex method). Today LPs with 10^9 variables can be solved!

Applications in all domains: economics (portfolio optimization); assignment (an airline assigns a crew to each flight such that every flight is covered, the regulations are met - each pilot flies at most a given number of hours/day - and the cost, e.g. hotels for the crew, is minimal); production and transport (an oil company: oil in Arabia, a refinery in Romania and customers in the USA); telecommunications (many calls from Cluj to Bucuresti and from Iasi to Timisoara - how do we route them optimally through the network); the traveling salesman problem; ...
Optimization problems - LP

Example of an LP problem:

min_{x ∈ R^3} 3x_1 + 2x_2 + 4x_3    (1)
s.t.: 0 ≤ x_1 ≤ 10, x_2 ≥ 0, x_3 ≥ 0
      x_1 + x_2 = 2
      x_2 + x_3 = 1

We observe: f(x) = 3x_1 + 2x_2 + 4x_3 = [3 2 4] [x_1; x_2; x_3] = c^T x, with c^T = [3 2 4].

Equality constraints: h(x): R^3 → R^2, h(x) = Ax − b = 0:

[1 1 0; 0 1 1] [x_1; x_2; x_3] − [2; 1] = 0,  i.e. A = [1 1 0; 0 1 1] and b = [2; 1].
Optimization problems - LP

Inequality constraints: g(x): R^3 → R^4, g(x) = Cx − d ≤ 0.

The inequalities 0 ≤ x_1 ≤ 10, x_2 ≥ 0, x_3 ≥ 0 can be written as:

[1 0 0; −1 0 0; 0 −1 0; 0 0 −1] [x_1; x_2; x_3] − [10; 0; 0; 0] ≤ 0,

i.e. C is the 4×3 matrix above and d = [10; 0; 0; 0].

Thus, problem (1) can be written as a standard LP. To solve LPs one uses the Matlab function linprog:

x = linprog(c, C, d, A, b)
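
As a minimal sketch, problem (1) can be solved by filling in the data derived above and calling linprog:

c = [3; 2; 4];
C = [1 0 0; -1 0 0; 0 -1 0; 0 0 -1];
d = [10; 0; 0; 0];
A = [1 1 0; 0 1 1];
b = [2; 1];
x = linprog(c, C, d, A, b)   % optimal solution of problem (1)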
Optimization problems - QP

If g(x) and h(x) in the NLP problem are affine and the objective function f(x) is quadratic, then the NLP becomes a quadratic programming problem (QP):

(QP): min_{x ∈ R^n} 1/2 x^T Q x + q^T x + r
s.t.: Cx ≤ d, Ax = b.

Convexity conditions for twice continuously differentiable functions: a function is convex if ∇²f(x) ⪰ 0. For a QP problem, if Q = Q^T, then ∇²f(x) = Q. In this case, the objective function is convex if Q ⪰ 0.

QPs are solved with interior-point, active-set or gradient-type methods: today one can solve very large problems (10^9 variables) that appear in many applications: search-engine optimization, image processing, weather prediction (see my page http://acse.pub.ro/person/ion-necoara)
Optimization problems - QP

Example of a QP problem:

min_{x ∈ R^n} x_1^2 + x_2^2 + x_1 x_2 + 3x_1 + 2x_2
s.t.: x ≥ 0, x_1 + x_2 = 5

We observe

f(x) = x_1^2 + x_2^2 + x_1 x_2 + 3x_1 + 2x_2
     = 1/2 [x_1 x_2] [2 1; 1 2] [x_1; x_2] + [3 2] [x_1; x_2],

so Q = [2 1; 1 2] and q^T = [3 2].

Inequality constraints: −I_2 x ≤ 0 ⇒ C = −I_2 and d = 0.
Equality constraints: [1 1] x = 5 ⇒ A = [1 1] and b = 5.

To solve QPs one uses the Matlab function quadprog:

x = quadprog(Q, q, C, d, A, b)
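
As a minimal sketch, the QP above can be solved by plugging in these matrices:

Q = [2 1; 1 2]; q = [3; 2];
C = -eye(2); d = [0; 0];
A = [1 1]; b = 5;
x = quadprog(Q, q, C, d, A, b)   % optimal solution of the QP example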
Optimization problems - CP

Convex optimization problems:

(CP): min_{x ∈ R^n} f(x)
s.t.: g(x) ≤ 0, Ax − b = 0,

An optimization problem is convex if the objective function f(x) and the feasible set are convex.

If g(x) is convex and h(x) is affine (i.e. h(x) = Ax − b), then the feasible set X = {x : g(x) ≤ 0, Ax = b} is convex.

• Remark: LP problems and QP problems (with Q ⪰ 0) are convex optimization problems. Convex problems are, in general, easier to solve than (NLP)s. Very many applications: signal processing, optimal control, optimal design.

• Remark: a convex optimization problem where f(x) = 1/2 x^T Q x + q^T x and g_i(x) = 1/2 x^T Q_i x + q_i^T x + r_i, i.e. f and the functions in the inequality constraints are quadratic, is called a QCQP (quadratically constrained quadratic program).
CP problems - CVX

For convex optimization problems:

(CP): min_{x ∈ R^n} f(x)
s.t.: g(x) ≤ 0, Ax − b = 0,

where f and g are convex functions, the best freely available software package is CVX (in Matlab we also have the function fmincon).

1. Download CVX from: www.stanford.edu/~boyd/cvx
2. Save it in a local directory. Start Matlab and add the path to this folder. Run the command: "cvx_setup".
3. Example: min_{x ≥ 1} ‖Ax − b‖. Take a matrix A of size m × n (e.g. A = randn(m, n)) and a vector b in Matlab. Run the following code in Matlab:

cvx_begin
    variable x(n)
    minimize(norm(A*x - b))
    subject to
        x >= 1
cvx_end
Optimization problems - SDP

SDP (semidefinite programming) problems: convex problems whose feasible set is described by LMIs. These problems are called SDPs because certain matrices are required to remain positive semidefinite. A general SDP problem can be formulated as:

(SDP): min_{x ∈ R^n} c^T x
s.t.: A_0 + Σ_{i=1}^n A_i x_i ⪯ 0, Ax − b = 0,

where the matrices A_i ∈ S^m for all i = 0, ..., n. Note that LP, QP and QCQP problems can also be formulated as SDP problems. Very many applications: optimal and robust control, stability of dynamical systems, localization problems, matrix completion, ...

Convex problems are much easier to solve than nonconvex ones! (good free optimization software for convex problems is CVX)
Application: system stability - CVX

Convex SDP problems of the form

(SDP): min_{x ∈ R^n} c^T x,  s.t.: LMI(x) ⪯ 0, Ax − b = 0,

can be solved with CVX. Consider the following example: let ẋ = Ax be a dynamical system; we want to check whether it is stable ⇔ there exists a symmetric matrix X such that: A^T X + XA ≺ 0, X ≻ 0. The two strict inequalities are homogeneous in X, so the problem can be formulated equivalently as:

A^T X + XA + I_n ⪯ 0,  X ⪰ I_n.

Run the following code in Matlab:

n = 10;   % problem dimension (not fixed on the slide; any n works)
% A - eigenvalues uniformly logarithmically spaced in [-10^1, -10^(-1)]
A = diag(-logspace(-0.5, 1, n)); U = orth(randn(n, n)); A = U'*A*U;
cvx_begin sdp
    variable X(n, n) symmetric   % Note: could also be diagonal, ...
    minimize(trace(X))           % Note: this line may be omitted
    A'*X + X*A + eye(n) <= 0
    X >= eye(n)
cvx_end
Application: hanging chain

Hanging chain: a chain in the x-z plane consisting of n + 1 points of equal mass connected by springs has the potential:

V = D/2 Σ_{i=1}^n ‖x_i − x_{i+1}‖² + Σ_{i=1}^{n+1} m g e_z^T x_i,

where x_1, ..., x_{n+1} ∈ R² are the positions of the points, g = 9.81 is the gravitational constant, m = 1 the mass of a point, e_z = [0 1]^T and D = 1500 the spring constant. The point with index 1 is fixed at (−1, 1) and the point x_{n+1} is at (2, 2) ∈ R². If the constraint e_z^T x_i ≥ 0 is active, then point i touches the ground. If the chain is at equilibrium, the potential V is at its minimum.

- For n = 49, compute the number of points that touch the ground. (It is enough to plot the points and count those with x_i(2) = 0.)
Application: hanging chain (cont.)

The problem is formulated as follows:

minimize_{x = (x_1, ..., x_{n+1}) ∈ R^{2(n+1)}}  V(x) := D/2 Σ_{i=1}^n ‖x_i − x_{i+1}‖² + Σ_{i=1}^{n+1} m g e_z^T x_i
subject to  x_1 = [−1 1]^T, x_{n+1} = [2 2]^T
            x_i(2) ≥ 0 for all i ∈ {1, ..., n + 1}

This problem is a convex QP. In fact, we can eliminate the constraints x_1 = [−1 1]^T, x_{n+1} = [2 2]^T; the decision variable becomes x = (x_2, ..., x_n) with the objective function (notation: x_i = [a_i b_i]^T):

V(x) = D/2 ( (−1 − a_2)² + Σ_{i=2}^{n−1} (a_i − a_{i+1})² + (a_n − 2)² )   [=: f_1(a)]
     + D/2 ( (1 − b_2)² + Σ_{i=2}^{n−1} (b_i − b_{i+1})² + (b_n − 2)² ) + Σ_{i=2}^n m g b_i   [=: f_2(b)]
Application: hanging chain (cont.)

The problem is reformulated as:

min_{a ∈ R^{n−1}, b ∈ R^{n−1}} f_1(a) + f_2(b)  s.t.: b ≥ 0,

where the functions f_i are quadratic:

f_1(a) = 1/2 a^T Q_1 a + q_1^T a  and  f_2(b) = 1/2 b^T Q_2 b + q_2^T b,

both having the same tridiagonal Hessian

Q_1 = Q_2 = D * [ 2 −1            ]
                [−1  2 −1         ]
                [   −1  2 −1      ]
                [       ...  ...  ]
                [        −1  2 −1 ]
                [           −1  2 ]

q_1 = [D 0 · · · 0 −2D]^T,  q_2 = [−D 0 · · · 0 −2D]^T + m g 1_{n−1}.
Computing the solution - quadprog

The convex QP problem:

min_{a ∈ R^{n−1}, b ∈ R^{n−1}} (1/2 a^T Q a + q_1^T a) + (1/2 b^T Q b + q_2^T b)  s.t.: b ≥ 0

is of the form

(QP): min_{x = (a, b) ∈ R^{2(n−1)}} 1/2 x^T Q x + q^T x  s.t.: Cx ≤ d.

We can solve it by calling the Matlab function quadprog with:

Q = blkdiag(Q, Q)
q = [D, zeros(1, n−3), −2D, −D + mg, mg, · · ·, mg, −2D + mg]^T
C = [zeros(n−1, n−1), −eye(n−1)],  d = 0

x = quadprog(Q, q, C, d)
Computing the solution - quadprog (cont.)

D = 1500; m = 1; g = 9.81;   % problem data from the previous slides
n = 49; nq = n - 1;
nOnes = ones(nq, 1);
Q = diag(2*nOnes, 0) - diag(nOnes(1:nq-1), -1) - diag(nOnes(1:nq-1), 1);
Q = D*blkdiag(Q, Q);
q1 = [D zeros(1, nq-2) -2*D];
q2 = [-D zeros(1, nq-2) -2*D] + m*g*ones(1, nq);
q = [q1 q2]';
C = [zeros(nq, nq) -eye(nq)];
d = zeros(nq, 1);
x = quadprog(Q, q, C, d);
zet_quadprog = x(nq+1:end);
Computing the solution - CVX

cvx_begin
    variable a(n+1)
    variable b(n+1)
    V = 0;
    for i = 1:n
        V = V + 0.5*D*(square(a(i+1) - a(i)) + square(b(i+1) - b(i)));
    end
    for i = 1:(n+1)
        V = V + m*g*b(i);
    end
    minimize V
    subject to
        a(1) == -1; b(1) == 1; a(50) == 2; b(50) == 2;
        b >= 0
cvx_end
zet_cvx = b(2:end-1);
Computing the solution - projected gradient

x0 = ones(2*nq, 1); eps = 0.0001;
eigenvalues = eig(Q); L = max(eigenvalues);
alpha = 1/L;
xg = x0; grad = Q*xg + q;
deltax = x0; iter_g = 0;
while norm(deltax) > eps
    xg_next = xg - alpha*grad;
    xg_next(nq+1:end) = max(xg_next(nq+1:end), 0);   % projection onto b >= 0
    deltax = xg - xg_next;
    xg = xg_next;
    grad = Q*xg + q;
    iter_g = iter_g + 1;
end
zet_gradient = xg(nq+1:end);
Computing the solution - conditional gradient

x0 = ones(2*nq, 1); eps = 0.002; iter_cg = 0;
alpha = 2/(iter_cg + 2);
xcg = x0; grad = Q*xcg + q; deltax = x0;
while norm(deltax) > eps
    s = linprog(grad, C, d, [], [], -2*ones(2*nq, 1), 2*ones(2*nq, 1));   % linear minimization oracle
    xcg_next = xcg + alpha*(s - xcg);
    deltax = xcg - xcg_next;
    xcg = xcg_next;
    grad = Q*xcg + q;
    iter_cg = iter_cg + 1;
    alpha = 2/(iter_cg + 2);
end
zet_cond_gradient = xcg(nq+1:end);
Hanging chain - results (plot)

number of active constraints: 8
number of projected-gradient iterations: 3382
number of conditional-gradient iterations: 10579
Application: matrix completion

- we are given a matrix X with missing entries (e.g. an image)
- we assume the matrix X has low rank!
- the goal is to find the missing entries of X
- to enforce low rank on a matrix one uses the nuclear norm ‖·‖_*: if A has the SVD A = Σ_{i=1}^r σ_i u_i v_i^T, then

‖A‖_* = Σ_{i=1}^r σ_i

- different formulations exist - we present two basic formulations:

(P1): min_{X ∈ R^{m×n}} rank(X)  s.t.: X_ij = A_ij ∀(i, j) ∈ Ω
or
(P2): min_{X ∈ R^{m×n}} Σ_{(i,j) ∈ Ω} ‖X_ij − A_ij‖²  s.t.: rank(X) ≤ r

and the corresponding (convex) relaxations.
Application: matrix completion (cont.)

We present several convex formulations (relaxations) of the nonconvex problem (P1):

(P1): min_{X ∈ R^{m×n}} rank(X)  s.t.: X_ij = A_ij ∀(i, j) ∈ Ω
⇒ convex relaxation:
min_{X ∈ R^{m×n}} ‖X‖_*  s.t.: X_ij = A_ij ∀(i, j) ∈ Ω

where the values A_ij are given for (i, j) ∈ Ω, a subset of the entries of the matrix we seek, and ‖X‖_* = Σ_{i=1}^r σ_i. Using the following (non-trivial!) matrix result:

‖X‖_* ≤ δ ⇔ ∃ W_1, W_2: tr(W_1) + tr(W_2) ≤ 2δ and [W_1 X; X^T W_2] ⪰ 0

we obtain that the convex relaxation can be written as an SDP:

(P1-sdp): min_{X, W_1, W_2} tr(W_1) + tr(W_2)
s.t.: X_ij = A_ij ∀(i, j) ∈ Ω, [W_1 X; X^T W_2] ⪰ 0

This SDP can be solved with CVX (interior-point methods)!
CVX implementation - SDP for (P1)

The SDP-type convex relaxation of (P1):

min_{X, W_1, W_2} tr(W_1) + tr(W_2)  s.t.: X_ij = A_ij ∀(i, j) ∈ Ω, [W_1 X; X^T W_2] ⪰ 0

N = 16; r = 2; df = 2*N*r - r^2;   % rank = 2; degrees of freedom
nSamples = 3*df;                    % number of observed entries
iMax = 5;
A = randi(iMax, N, r)*randi(iMax, r, N);   % our matrix
rPerm = randperm(N^2); omega = sort(rPerm(1:nSamples));   % omega = set of observed entries
Y = nan(N); Y(omega) = A(omega); disp(Y)
cvx_begin sdp
    variable W1(N, N) symmetric
    variable W2(N, N) symmetric
    variable X(N, N)
    minimize(trace(W1) + trace(W2))
    X(omega) == A(omega)
    [W1 X; X' W2] >= 0
cvx_end
(Sub)gradient implementation - (P1)

(P1): convex relaxation  min_{X ∈ R^{m×n}} ‖X‖_*  s.t.: X_ij = A_ij ∀(i, j) ∈ Ω

To compute a (sub)gradient of the function f(X) = ‖X‖_*, one uses the SVD of X = UΣV^T:

∇f(X) = U V^T

So we can implement the following iteration (e.g., with c = 1 and X_0 = 0 except X_0(Ω) = A(Ω)):

X_{k+1} = [X_k − α_k ∇f(X_k)]_Ω,  α_k = c/k

or explicitly (X_k = U_k Σ_k V_k^T):

X_{k+1} = [X_k − (c/k) U_k V_k^T]_Ω
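
A minimal Matlab sketch of this projected subgradient iteration, reusing N, A and omega from the CVX slide above; the iteration budget maxIter is an assumed choice, and the projection [·]_Ω simply resets the observed entries to A(omega):

maxIter = 500; c = 1;                % maxIter is an assumed budget
X = zeros(N); X(omega) = A(omega);   % X0 = 0 with X0(Omega) = A(Omega)
for k = 1:maxIter
    [U, S, V] = svd(X);              % X_k = U_k*S_k*V_k'
    X = X - (c/k)*(U*V');            % subgradient step: grad f(X_k) = U_k*V_k'
    X(omega) = A(omega);             % projection [.]_Omega: re-impose X_ij = A_ij
end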
Application: matrix completion (cont.)

Different formulations (relaxations) of the nonconvex problem (P2):

(P2): min_{X ∈ R^{m×n}, rank(X) = r} ‖P(X − A)‖²
⇒ nonconvex relaxation:
min_{U ∈ R^{m×r}, V ∈ R^{n×r}} ‖P(U V^T − A)‖²

where P is the linear operator of entrywise projection:

⟨E_ij, X − A⟩ = 0 ⇔ Trace(E_ij^T (X − A)) = 0 ∀(i, j) ∈ Ω.

We observe that we have obtained a nonlinear (unconstrained) least-squares problem over the space of matrices:

(P2-cmmp): min_{U ∈ R^{m×r}, V ∈ R^{n×r}} ‖P(U V^T − A)‖²

We can apply Gauss-Newton type methods (implement them)!
Application: matrix completion (cont.)

Different formulations (relaxations) of the nonconvex problem (P2):

(P2): min_{X ∈ R^{m×n}, rank(X) = r} ‖P(X − A)‖²
⇒ convex relaxation:
min_{X ∈ R^{m×n}, ‖X‖_* ≤ δ_r} ‖P(X − A)‖²

where P is the linear operator of entrywise projection.

Using again the matrix result:

‖X‖_* ≤ δ ⇔ ∃ W_1, W_2: tr(W_1) + tr(W_2) ≤ 2δ and [W_1 X; X^T W_2] ⪰ 0

we obtain an SDP-type relaxation of (P2):

(P2-sdp): min_{X, W_1, W_2} Σ_{(i,j) ∈ Ω} (X_ij − A_ij)²
s.t.: tr(W_1) + tr(W_2) ≤ 2δ_r, [W_1 X; X^T W_2] ⪰ 0

This SDP can be solved with CVX or with the conditional gradient method (implement them)!
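
A minimal CVX sketch for (P2-sdp), mirroring the (P1) implementation above; N, A and omega are as on the (P1) slide, while the nuclear-norm bound delta_r is an assumed input:

delta_r = 100;   % assumed value for the bound delta_r
cvx_begin sdp
    variable W1(N, N) symmetric
    variable W2(N, N) symmetric
    variable X(N, N)
    minimize(sum_square(X(omega) - A(omega)))
    trace(W1) + trace(W2) <= 2*delta_r
    [W1 X; X' W2] >= 0
cvx_end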
Conditional gradient implementation - SDP for (P2)

The SDP problem for (P2) of the form

min_X f(X) = Σ_{(i,j) ∈ Ω} (X_ij − A_ij)²  s.t.: ‖X‖_* ≤ δ_r

is well suited for the conditional gradient method:

- the gradient can be computed efficiently if #Ω is small:

∇f(X) = P(X − A)

- the subproblem at each step can be solved explicitly:

S_k = arg min_X ⟨∇f(X_k), X⟩  s.t.: ‖X‖_* ≤ δ_r

by taking the largest singular value σ_1 of ∆ = ∇f(X_k), with the corresponding left singular vector u_1 and right singular vector v_1:

S_k = −δ_r u_1 v_1^T.

- the subproblem in the projected gradient method requires the full SVD decomposition!
- we obtain an iteration of the form (rank-1 updates)

X_{k+1} = (1 − α_k) X_k − α_k δ_r (u_1 v_1^T),  α_k = 2/(k + 2).
Conditional gradient implementation - SDP for (P2)

- the subproblem in the conditional gradient method has the form:

S* = arg min_X ⟨C, X⟩  s.t.: ‖X‖_* ≤ δ_r

whose solution requires computing the dominant singular vectors u_1 and v_1 of C, i.e. S* = −δ_r u_1 v_1^T (cost O(mn)).

- the subproblem in the projected gradient method has the form:

S* = arg min_X ‖X − C‖²  s.t.: ‖X‖_* ≤ δ_r

and requires computing the full SVD of C (cost O(m²n)).

Computing the maximum singular value of C ∈ R^{m×n} with the power method:

Input: matrix C, initial vectors x = x0 and u = 0, error = 1, eps
Output: first singular value σ_1 and singular vectors u_1, v_1 satisfying: C v_1 = σ_1 u_1

while error > eps do:
    x = C^T*C*x; v = x/norm(x); sigma1 = norm(C*v);
    error = norm(C*v - sigma1*u);   % u from the previous pass, so the test measures progress
    u = C*v/sigma1;
end
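
A possible Matlab wrapper for this power method, under the name dominant_sing_values used by the code on the next slide; the random initialization and the tolerance are assumed choices:

function [u, v] = dominant_sing_values(C)
x = randn(size(C, 2), 1);         % assumed random initialization
u = zeros(size(C, 1), 1);
error = 1; eps_pm = 1e-8;         % assumed tolerance
while error > eps_pm
    x = C'*(C*x);
    v = x/norm(x);
    sigma1 = norm(C*v);
    error = norm(C*v - sigma1*u);  % compare against u from the previous pass
    u = C*v/sigma1;
end
end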
Conditional gradient implementation - SDP for (P2)

delta = 100;   % delta = delta_r, the nuclear-norm bound (assumed value, as above)
X = randi(iMax, N, 1)*randi(iMax, 1, N);   % rank-1 starting point
eror = norm(X); accur = 0.001; iter = 0;
while eror > accur
    grad = zeros(N, N);
    grad(omega) = X(omega) - A(omega);   % grad f(X) = P(X - A)
    [u, v] = dominant_sing_values(grad);
    alpha = 2/(2 + iter);
    Xnext = (1 - alpha)*X - alpha*delta*(u*v');
    eror = norm(Xnext - X);
    X = Xnext;
    iter = iter + 1;
end
Application: matrix completion

Image recovery using one of the convex relaxations (SDP):

min_{X ∈ R^{m×n}, P(X − A) = 0} ‖X‖_*    OR    min_{X ∈ R^{m×n}, ‖X‖_* ≤ δ_r} ‖P(X − A)‖²

(Figure: the given image (40% missing entries) and the recovered image.)