
Chapter 9

Misc topics

Contents (class version)

9.0 Review
    Binary classifier design: Review
9.1 If time had permitted...
9.2 Towards CNN methods
    Review of denoising by soft thresholding
    Review of compressed sensing with transform sparsity
    Review of patch transform sparsity


9.0 Review

Final course notes on gdrive.

Two principal kinds of questions:
• Procedural. Given Ψ(x) and an algorithm (by name), implement/analyze it.
• Conceptual. Given an application, determine Ψ(x), choose an algorithm, and implement/analyze it.

Interesting Exam2 sample questions (by pdf file page):
• Procedural: p1: composite with ‖x‖₁ and box constraint; p20: LS + hinge regularizer; p24: sigmoid as smooth 0-1 loss; p43: prox for leaky ReLU; p44: hinge + 0-norm
• Conceptual: p2: ‖x‖₀ constraint; p6: L+S; p45: compressed sensing with transform/analysis sparsity


Caroline’s review diagram

[Hand-drawn flowchart for choosing an algorithm; only partially recoverable from the scan. Top split: unconstrained objective function, smooth vs. non-smooth. Smooth branch: line search in general form; consider stochastic GD; quadratic vs. general Lipschitz cases; preconditioned gradient and nonlinear conjugate gradient (converges in N iterations for a quadratic with N unknowns); Huber’s quadratic majorizer and MM when the majorizer minimization is easy. Non-smooth branch: prox-friendly cases use PGM/POGM; else alternating/block methods: BCD/BCM (block coordinate descent/minimization) and ADMM (alternating direction method of multipliers). General strategies: rewrite as a joint cost function; write constraints using the characteristic function (PSD, PGD, POGM); rewrite to make the problem convex, e.g., handle ‖x‖₁ via x = u − v (FGM, OGM) or a linear reformulation (PGM, POGM).]


Binary classifier design: Review

General cost function for binary classifier design for sign(v′x) with x ∈ ℝᴺ, A ∈ ℝ^{M×N}, β ≥ 0:

Ψ(x) = f(x) + β ‖x‖ₚᵖ ,   f(x) = 1′_M h.(Ax),   p ∈ {1, 2}

Quadratic

h(t) = ½ (t − 1)² ⟹ f(x) = 1′_M h.(Ax) = ½ ‖Ax − 1_M‖₂² (Picture)

Method for p = 2? A: GD B: CG C: OGM D: POGM E: ADMM ??

Other options? PGD, PSD, FGM, OGM-LS, MM

Method for p = 1? A: GD B: CG C: OGM D: POGM E: ADMM ??

Other options? GP u-v, PGM, FPGM, OGM-LS, MM (a minimal sketch of both cases follows below)
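To make the two cases concrete, a minimal Julia sketch (not from the notes; the solver choices and iteration count are illustrative): for p = 2 the quadratic loss gives a ridge-like closed-form solution, while p = 1 is prox-friendly via soft thresholding (PGM/ISTA).

using LinearAlgebra

# Ψ(x) = 1′h.(Ax) + β‖x‖ₚᵖ with quadratic h(t) = ½(t−1)²

# p = 2: ridge-like closed form, since ∇Ψ = A′(Ax − 1) + 2βx = 0
ridge_weights(A, β) = (A'A + 2β*I) \ (A' * ones(size(A, 1)))

# p = 1: proximal gradient method (ISTA) with soft thresholding
soft(z, c) = sign(z) * max(abs(z) - c, 0)
function lasso_weights(A, β; niter = 200)
    x = zeros(size(A, 2))
    L = opnorm(A)^2                   # Lipschitz constant of ∇f
    for _ in 1:niter
        g = A' * (A*x .- 1)           # gradient of ½‖Ax − 1‖²
        x = soft.(x - g/L, β/L)       # proximal (shrinkage) step
    end
    return x
end

Momentum variants (FPGM/POGM) would accelerate the p = 1 loop; CG would avoid forming A′A explicitly for p = 2.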


Example

Same 5/8 handwritten digit data as in HW

[Figure: accuracy (%) on the vertical axis (75–100) vs. an unlabeled horizontal axis (0–8); curves: Validation both, Test both, Validation 8, Test 8, Validation 5, Test 5.]


[Figure: two weight images on a 1–28 by 1–27 pixel grid. Left: "LS classifier weights," color scale −0.0020 to 0.0020. Right: "Best regularized LS weights," color scale −1.00 to 0.75.]


Logistic / Huber hinge

These choices for h are convex and smooth, and we know their Lipschitz constants and their quadratic majorizers (optimal in the Huber hinge case).

Method for p = 2? A: GD B: CG C: OGM D: POGM E: ADMM ??

Other options? PGD, PSD, CG-GD1d, FGM, OGM-LS, MM (convex)

Method for p = 1? A: GD B: CG C: OGM D: POGM E: ADMM ??

Other options? GP u-v, PGM, FPGM, OGM-LS, MM (see the loss sketch below)
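For reference, a sketch of the two smooth losses in standard form (the notes' exact scalings and corner width δ may differ):

# Hedged sketch: standard logistic and Huber-hinge losses.
hlogistic(t) = log1p(exp(-t))         # logistic loss; h″ ≤ 1/4

function hhuber(t; δ = 0.5)           # Huber hinge with corner width δ; h″ ≤ 1/δ
    t ≥ 1 ? 0.0 : (t > 1 - δ ? (1 - t)^2 / (2δ) : (1 - t) - δ/2)
end

# For f(x) = 1′h.(Ax), ∇f(x) = A′h′.(Ax) has Lipschitz constant
# L = ‖A‖₂² · max h″ (1/4 logistic, 1/δ Huber hinge),
# enabling GD/OGM steps of size 1/L in the p = 2 case.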


Hinge

Convex, but non-smooth.

Method for p = 2? A: GD B: CG C: OGM D: POGM E: ADMM ??

Other options? MM (convex, but difficult), duality (545)

Method for p = 1? A: GD B: CG C: OGM D: POGM E: ADMM ??

Other options? MM (convex, but difficult), SG (a subgradient sketch follows below)
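Since the hinge is non-smooth (and not prox-friendly once composed with A), the SG fallback for p = 1 is easy to sketch; the diminishing step-size rule below is an assumption, not from the notes:

# Subgradient method for Ψ(x) = 1′hinge.(Ax) + β‖x‖₁ (minimal sketch)
hinge_subgrad(t) = t < 1 ? -1.0 : 0.0   # one subgradient of max(0, 1 − t)
function sg_hinge_l1(A, β; niter = 500)
    x = zeros(size(A, 2))
    for k in 1:niter
        g = A' * hinge_subgrad.(A * x) .+ β .* sign.(x)   # g ∈ ∂Ψ(x)
        x -= g / sqrt(k)                # diminishing step αₖ = 1/√k
    end
    return x
end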


Sigmoid (smooth version of 0-1) loss

Nonconvex but smooth; inspired by a W20 student. Initialization x₀ likely matters.

Method for p = 2? A: GD B: CG C: OGM D: POGM E: ADMM ??

Other options? SG

Quadratic majorizer (cf. Huber hinge)

Method for p = 1? A: GD B: CG C: OGM D: POGM E: ADMM ??

Other options? SG


0-1 loss

Start with the convex hinge, use that solution to initialize the sigmoid, then let the sigmoid width → 0, as sketched below.
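A sketch of that continuation, assuming the smooth 0-1 surrogate h(t) = 1/(1 + e^{t/w}) with width w (a plausible form; the annealing schedule and step size are also assumptions):

# Continuation: warm-start from the hinge solution, then anneal w toward 0.
dsig(t, w) = -1 / (w * (1 + exp(t/w)) * (1 + exp(-t/w)))   # h′(t), stable form
function zero_one_continuation(A, x_hinge; widths = [1.0, 0.3, 0.1], niter = 100, μ = 0.01)
    x = copy(x_hinge)                 # initialize from the convex hinge fit
    for w in widths                   # sharpen the loss toward 0-1
        for _ in 1:niter
            x -= μ * (A' * dsig.(A * x, w))   # plain gradient descent
        end
    end
    return x
end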


9.1 If time had permitted...

recommender systems

neural networks

parallel computing

semidefinite programming

non-convex regularizers that lead to convex cost functions [9]

relationships of image models and priors for restoration problems [11]

Regularization parameter selection

SURE etc.

CNN training using SURE without ground-truth images [1] [2]

Optimization on manifolds

Minimization subject to constraints like a matrix being unitary (Stiefel manifold) [5–8]


9.2 Towards CNN methods

This section reviews some iterative algorithms based on sparsity models and summarizes how those algorithms provide a foundation for “variational neural networks” when “unrolled.”

Review of denoising by soft thresholding

Consider the measurement model y = x + ε and the signal model that assumes Tx is sparse for some unitary transform T. The natural optimization problem for estimating x is

x̂ = arg min_x ½ ‖x − y‖₂² + β ‖Tx‖₁ .

The non-iterative solution is the following denoising operation:

x̂ = T′ soft.(Ty, β).

The essential ingredients of a convolutional neural network (CNN) are present in this simple form: some filters, a nonlinearity, and more filters.
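As a concrete instance, a minimal Julia sketch with T given as an orthonormal matrix (for an orthogonal wavelet or DCT, the products Ty and T′(·) become fast transforms):

# x̂ = T′ soft.(Ty, β): analyze, shrink, synthesize.
soft(z, c) = sign(z) * max(abs(z) - c, 0)   # scalar soft threshold
denoise(y, T, β) = T' * soft.(T * y, β)     # T unitary, so T′ = T⁻¹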


Review of compressed sensing with transform sparsity

For the compressed sensing measurement model y = Ax + ε, again assuming Tx is sparse for some unitary transform T, the natural optimization problem for estimating x is

x̂ = arg min_x ½ ‖Ax − y‖₂² + β ‖Tx‖₁ .

In this case there is no closed-form solution and iterative algorithms are needed. The simplest algorithm is the proximal gradient method (PGM), aka ISTA, which is an MM update based on the majorizer

x̃ₖ = xₖ − (1/L) A′(Axₖ − y)

φₖ(x) = (L/2) ‖x − x̃ₖ‖₂² + β ‖Tx‖₁ ,

where L = |||A|||₂², for which the minimization step is a denoising operation:

xₖ₊₁ = arg min_x φₖ(x) = T′ soft.(T x̃ₖ, β/L).
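A minimal Julia sketch of this iteration, again with T as an orthonormal matrix (the initialization is an arbitrary choice here; POGM/FISTA would add momentum):

# PGM/ISTA for ½‖Ax − y‖₂² + β‖Tx‖₁ with unitary T.
using LinearAlgebra
soft(z, c) = sign(z) * max(abs(z) - c, 0)
function ista(A, T, y, β; niter = 100)
    x = A' * y                        # simple initialization (an assumption)
    L = opnorm(A)^2                   # L = |||A|||₂²
    for _ in 1:niter
        z = x - (A' * (A*x - y)) / L  # gradient step: x̃ₖ
        x = T' * soft.(T * z, β/L)    # denoising (proximal) step
    end
    return x
end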

The “unrolled loop” block diagram for this algorithm is the basis for learned ISTA (LISTA) [3]:

y → x0 → data → x̃0 → denoise → x1 → data → x̃1 → denoise → x2 → data → x̃2 · · ·

Denoise options: hand-crafted transform (e.g., wavelets), transform learned from training data, CNN

Can learn: T, β, soft(·), T′. Early SURE-LET work: [4].


Review of patch transform sparsity

The natural cost function for a patch transform sparse model is

x̂ = arg min_x ½ ‖Ax − y‖₂² + β R(x),   R(x) = min_Z Σₚ₌₁ᴾ ½ ‖T Pₚ x − zₚ‖₂² + α ‖zₚ‖₁ .

For a BCD approach, the Z update is simple:

zₚ⁽ᵗ⁺¹⁾ = soft.(T Pₚ xₜ, α).

To better understand the x update, it is helpful to expand the regularizer:

Σₚ₌₁ᴾ ½ ‖T Pₚ x − zₚ⁽ᵗ⁺¹⁾‖₂²
= ½ x′ (Σₚ₌₁ᴾ Pₚ′ T′ T Pₚ) x − x′ (Σₚ₌₁ᴾ Pₚ′ T′ zₚ⁽ᵗ⁺¹⁾) + c₁
= ½ x′ H x − x′ x̃ₜ + c₁ ,   H ≜ Σₚ₌₁ᴾ Pₚ′ T′ T Pₚ ,   x̃ₜ ≜ Σₚ₌₁ᴾ Pₚ′ T′ zₚ⁽ᵗ⁺¹⁾

If T has orthonormal columns (e.g., is unitary), and if the patches have d pixels and are chosen with stride = 1 and periodic boundary conditions, then

H = Σₚ₌₁ᴾ Pₚ′ T′ T Pₚ = Σₚ₌₁ᴾ Pₚ′ Pₚ = d I.

Completing the square for the regularizer term yields

… = (d/2) x′ x − x′ x̃ₜ + c₁ = (d/2) ‖x − x̄ₜ‖₂² + c₂ ,   x̄ₜ ≜ (1/d) Σₚ₌₁ᴾ Pₚ′ T′ zₚ⁽ᵗ⁺¹⁾ = (1/d) Σₚ₌₁ᴾ Pₚ′ T′ soft.(T Pₚ xₜ, α).

Thus the x update for the BCD algorithm is

xₜ₊₁ = arg min_x ½ ‖Ax − y‖₂² + β (d/2) ‖x − x̄ₜ‖₂² .

Here, x̄ₜ acts like a prior for the update. For some cases there is a closed-form solution (like single-coil Cartesian MRI). Otherwise, one or more iterations are needed.
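A minimal Julia sketch of the z and x̄ₜ computation for a 1D signal, with circular indexing standing in for Pₚ (stride 1, periodic boundaries, d × d unitary T; all names are illustrative):

# One pass of the z-update and x̄ₜ accumulation (1D sketch).
soft(z, c) = sign(z) * max(abs(z) - c, 0)
function xbar_update(x, T, α)
    N, d = length(x), size(T, 1)
    xbar = zeros(N)
    for p in 1:N                      # one patch per pixel (stride 1)
        idx = mod1.(p:(p + d - 1), N) # circular indexing plays the role of Pₚ
        zp = soft.(T * x[idx], α)     # z-update for patch p
        xbar[idx] .+= T' * zp         # deposit Pₚ′ T′ zₚ
    end
    return xbar ./ d                  # since H = dI
end

The x update then solves the quadratic problem above; when A′A + βdI is easy to invert (e.g., single-coil Cartesian MRI), xₜ₊₁ = (A′A + βdI)⁻¹ (A′y + βd x̄ₜ) in closed form.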

Unrolling the BCD loop in block diagram form:

y → x0 → denoise → x̄0 → data → x1 → denoise → x̄1 → data → x2 → denoise → x̄2 · · ·

Denoise options: hand-crafted transform (e.g., wavelets), transform learned from training data, CNN

[Figure: "Machine-Learning Class," a humor cartoon by Robert W. Heath, Jr. and Nuria González-Prelcic, reproduced from IEEE Signal Processing Magazine, May 2019, p. 127.]


Bibliography

[1] C. A. Metzler, A. Mousavi, R. Heckel, and R. G. Baraniuk. Unsupervised learning with Stein’s unbiased risk estimator. 2018.

[2] S. Soltanayev and S. Y. Chun. “Training deep learning based denoisers without ground truth data”. In: Neural Info. Proc. Sys. 2018, 3257–67.

[3] K. Gregor and Y. LeCun. “Learning fast approximations of sparse coding”. In: Proc. Intl. Conf. Mach. Learn. 2010.

[4] T. Blu and F. Luisier. “The SURE-LET approach to image denoising”. In: IEEE Trans. Im. Proc. 16.11 (Nov. 2007), 2778–86.

[5] A. Edelman, T. A. Arias, and S. T. Smith. “The geometry of algorithms with orthogonality constraints”. In: SIAM J. Matrix. Anal. Appl. 20.2 (1998), 303–53.

[6] T. E. Abrudan, J. Eriksson, and V. Koivunen. “Steepest descent algorithms for optimization under unitary matrix constraint”. In: IEEE Trans. Sig. Proc. 56.3 (Mar. 2008), 1134–47.

[7] B. Gao, X. Liu, X. Chen, and Y. Yuan. “A new first-order algorithmic framework for optimization problems with orthogonality constraints”. In: SIAM J. Optim. 28.1 (Jan. 2018), 302–32.

[8] S. Chen, S. Ma, A. M-C. So, and T. Zhang. “Proximal gradient method for nonsmooth optimization over the Stiefel manifold”. In: SIAM J. Optim. 30.1 (Jan. 2020), 210–39.

[9] A. Lanza, S. Morigi, I. W. Selesnick, and F. Sgallari. “Sparsity-inducing nonconvex nonseparable regularization for convex image processing”. In: SIAM J. Imaging Sci. 12.2 (Jan. 2019), 1099–1134.

[10] R. T. Rockafellar. “Lagrange multipliers and optimality”. In: SIAM Review 35.2 (June 1993), 183–238.

[11] B. Wen, Y. Li, Y. Li, and Y. Bresler. A set-theoretic study of the relationships of image models and priors for restoration problems. 2020.

[12] W. Zuo and Z. Lin. “A generalized accelerated proximal gradient approach for total-variation-based image restoration”. In: IEEE Trans. Im. Proc. 20.10 (Oct. 2011), 2748–59.