ILAS 2013
Providence, RI, June 3-7, 2013

The 18th Conference of the International Linear Algebra Society

Local Organizing Committee

• Tom Bella (chair), University of Rhode Island

• Misha Kilmer, Tufts University

• Steven Leon, UMass Dartmouth

• Vadim Olshevsky, University of Connecticut

Scientific Organizing Committee

• Tom Bella (chair)

• Vadim Olshevsky (chair)

• Ljiljana Cvetkovic

• Heike Fassbender

• Chen Greif

• J. William Helton

• Olga Holtz

• Steve Kirkland

• Victor Pan

• Panayiotis Psarrakos

• Tin Yau Tam

• Paul Van Dooren

• Hugo Woerdeman

Conference Staff

• Kate Bella

• Bill Jamieson

• Michael Krul

• Heath Loder

• Caitlin Phifer

• Jenna Reis

• Daniel Richmond

• Diana Smith

• Daniel Sorenson


Contents

Conference Organizing Committees and Staff

Conference Information
  Host Institution and Conference Venue
  Travel Information
  Registration
  WiFi Internet Connection
  Presentation Equipment
  Coffee Breaks, Lunches, and Dinners
  Welcome Reception
  Conference Excursion
  Conference Banquet
  ILAS Membership Business Meeting
  Conference Proceedings

Schedules
  Schedule Overview
  Sunday, June 2nd
  Monday, June 3rd
  Tuesday, June 4th
  Wednesday, June 5th
  Thursday, June 6th
  Friday, June 7th
  Session Schedules and Details

Minisymposia

Abstracts

Speaker Index

Conference Information

Host Institution and Conference Venue

The University of Rhode Island, the state's current Land, Sea, and Urban Grant public research institution, is the host institution for the conference. The conference venue is the Crowne Plaza Providence–Warwick. All talks, events, and the conference banquet will be held at the Crowne Plaza. Located just minutes from downtown Providence, the Crowne Plaza is also easily accessed from the airport, as it offers a free, 24-hour shuttle service to and from T.F. Green Airport. The address is

Crowne Plaza Providence–Warwick
801 Greenwich Ave
Warwick, Rhode Island 02886

Travel Information

T.F. Green International Airport (Airport Code PVD) - T.F. Green is located two miles from the Crowne Plaza, and less than a mile from the Holiday Inn Express (both hotels offer a free 24-hour shuttle to and from T.F. Green). Details on ground transportation from T.F. Green may be found at http://www.pvdairport.com/main.aspx?sec_id=59.

Logan International Airport (Boston) (Airport Code BOS) - Logan Airport is 62 miles from the Crowne Plaza, and is easily reachable by car via I-95 South, or by train into Providence (see below).

Trains (MBTA Commuter Rail, Amtrak, etc.) - The Providence Train Station is located downtown, 12 miles from the Crowne Plaza. The hotel may be reached by taxi.

Registration

The registration desk is located outside of the Grand Ballroom, near the Rotunda (see map). The hours of operation of the registration desk are:

Sunday, June 2nd     6:00pm - 8:00pm
Monday, June 3rd     7:00am - 12:00pm, 1:30pm - 7:00pm
Tuesday, June 4th    7:00am - 12:00pm, 1:30pm - 3:30pm
Wednesday, June 5th  8:00am - 10:00am
Thursday, June 6th   8:00am - 10:00am, 1:30pm - 3:30pm
Friday, June 7th     8:00am - 10:00am

Attendees who have already paid the registration fee may pick up their materials by dropping by the registration desk. If you have not yet paid the registration fee, you may do so at the registration desk by cash or check (in US dollars).


If you have any questions, please stop by and ask our staff.

WiFi Internet Connection

Wireless internet is available throughout the meeting spaces in the Crowne Plaza to all attendees. The wireless access point to connect to is crowne1 (the last character is the numeral "one") and the password (which is case-sensitive) is Advantage7. Guests at the Crowne Plaza should note that this access information may be different from the information in guestrooms.

Presentation Equipment

All meeting rooms are equipped with a high-lumen projector and large projection screen, together with a laptop computer and remote control/laser pointer. The provided laptop may be used to present a talk from a USB jump drive in either PDF or PPT format (though in general not all PPT formats seem to be compatible with all PPT viewers). Presenters may also connect their own computers directly to the projector.

We encourage all presenters to preload the PDF for their talk onto the laptop computer beforehand and ensure that it is displaying correctly. Presenters using their own laptops should check compatibility with the projector in advance, if possible.

Coffee Breaks, Lunches, and Dinners

Coffee, tea, and juice will be available in the Grand Foyer each morning from 7:15am until 8:15am. After the morning plenary sessions, there will be a coffee break with snacks from 9:30am until 10:00am. The afternoon coffee breaks will be held between the afternoon parallel sessions, and concurrent with the ILAS Business Meeting on Thursday.

During lunch and dinner, attendees will be on their own. Some options are:

• Concession Lunches – Grand Foyer – The Crowne Plaza will have lunches available for sale in the Grand Foyer for ILAS attendees.

• Remington's Restaurant – Guests can enjoy fine American cuisine and a friendly dining atmosphere without ever leaving the hotel. Remington's Restaurant serves breakfast, lunch, and dinner.

• Alfred's Lounge – Alfred's Lounge offers a traditional library-style decor with oversized leather sofas and a well-appointed fireplace. Alfred's Lounge features over 100 wine selections by the glass. Also serving dinner and light fare.

• Area Restaurants – The Crowne Plaza operates a shuttle service for small groups to nearby restaurants. Contact the hotel front desk (not the ILAS registration desk) for details.

• Room Service – Room service is also available 24 hours.

Welcome Reception

There will be a welcome reception for attendees arriving the night before the conference, Sunday, June 2nd, from 6:00pm until 8:00pm. The welcome reception will be in the Rotunda. If you are arriving on Sunday or earlier, please stop by! The registration desk will be open during this time as well, so you may check in and pick up your attendee materials early if you wish.

Conference Excursion

The Newport Mansions are a major tourist attraction in Rhode Island. Details may be found on their website at http://www.newportmansions.org/. The excursion to Newport will be Wednesday afternoon, and the $25 fee includes bus transportation to and from Newport and admission into the mansion visited. Due to high demand, tickets for the excursion are almost sold out at this time, but any remaining tickets may be purchased online or at the registration desk (with cash or checks in US dollars) while supplies last.

Attendees with tickets will have the opportunity to sign up to visit either The Breakers or Rosecliff. Two buses will be going to The Breakers, and one bus will be going to Rosecliff. If you have a preference for which mansion you'd like to visit, please let us know at the registration desk.

The Breakers
The Breakers is the grandest of Newport's summer "cottages" and a symbol of the Vanderbilt family's social and financial preeminence in turn-of-the-century America. Commodore Cornelius Vanderbilt (1794-1877) established the family fortune in steamships and later in the New York Central Railroad, which was a pivotal development in the industrial growth of the nation during the late 19th century.

Rosecliff
Commissioned by Nevada silver heiress Theresa Fair Oelrichs in 1899, architect Stanford White modeled Rosecliff after the Grand Trianon, the garden retreat of French kings at Versailles. Scenes from several films have been shot on location at Rosecliff, including The Great Gatsby, True Lies, Amistad, and 27 Dresses.

(descriptions and photographs courtesy of The Preservation Society of Newport County)

Tentative Schedule for the Newport Excursion

Wednesday 1:00pm  Meet buses outside of Crowne Plaza
Wednesday 2:45pm  Arrive at The Breakers or Rosecliff
                  Explore mansion and grounds, 90 minutes
Wednesday 4:30pm  Meet back at buses
Wednesday 5:00pm  Arrive in downtown Newport
                  Attendees on their own for dinner
Wednesday 8:00pm  Meet back at buses
Wednesday 9:30pm  Arrive back at Crowne Plaza

After visiting the mansions, attendees will have the opportunity to visit the downtown Newport area for dinner (attendees on their own). A (very incomplete) list of some local restaurants is given below.

Some Restaurants in Downtown Newport

Black Pearl 401.846.5264 blackpearlnewport.com

Brick Alley Pub and Restaurant 401.849.6334 brickalley.com

Busker’s Pub and Restaurant 401.846.5856 buskerspub.com

Fluke Wine Bar and Kitchen 401.849.7778 flukewinebar.com

Gas Lamp Grille 401.845.9300 gaslampgrille.com

Lucia Italian Restaurant and Pizzeria 401.846.4477 luciarestaurant.com

Panera Bread 401.324.6800 panerabread.com

The Barking Crab 401.846.2722 barkingcrab.com

Wharf Pub and Restaurant 401.846.9233 thewharfpub.com

Conference Banquet

The conference banquet will be held on Thursday night, June 6th, from 8pm until 10:30pm. Tickets for the banquet are not included in the conference registration, and may be purchased in advance online through the registration system for $50 each, or at the registration desk (with cash or checks in US dollars) while supplies last. During the banquet, the Hans Schneider Prize will be presented to Thomas Laffey.

ILAS Membership Business Meeting

ILAS will hold its Membership Business Meeting on Thursday afternoon, 3:30pm – 5:30pm, in the Grand Ballroom.

Conference Proceedings

The proceedings of the 18th Conference of the International Linear Algebra Society, Providence, USA, June 3-7, 2013, will appear as a special issue of Linear Algebra and Its Applications.

The Guest Editors for this issue are:

• Chen Greif,

• Panayiotis Psarrakos,

• Vadim Olshevsky, and

• Hugo J. Woerdeman.

The responsible Editor-in-Chief of the special issue is Peter Semrl.

The deadline for submissions will be October 31, 2013.

All papers submitted for the conference proceedings will be subject to the usual LAA refereeing process.

They should be submitted via the Elsevier Editorial System (EES): http://ees.elsevier.com/laa/

When selecting Article Type, please choose Special Issue: 18th ILAS Conference, and when requesting an editor, please select the responsible Editor-in-Chief, P. Semrl. You will have the possibility to suggest which of the four guest editors you would like to handle your paper.


ILAS 2013 Schedule Overview

Monday, June 3rd
• Opening Remarks
• Plenary talks: Anne Greenbaum; Leiba Rodman
• Coffee Break
• Morning sessions: Linear Algebra Problems in Quantum Computation; Advances in Combinatorial Matrix Theory and its Applications; Generalized Inverses and Applications; Nonlinear Eigenvalue Problems; Linear Least Squares Methods: Algorithms, Analysis, and Applications
• Lunch
• Plenary talk: Alan Edelman
• Early afternoon sessions: Linear Algebra Problems in Quantum Computation; Advances in Combinatorial Matrix Theory and its Applications; Generalized Inverses and Applications; Contributed Session on Algebra and Matrices over Arbitrary Fields; Nonlinear Eigenvalue Problems; Krylov Subspace Methods for Linear Systems
• Coffee Break
• Late afternoon sessions: Linear Algebra Problems in Quantum Computation; Advances in Combinatorial Matrix Theory and its Applications; Matrices over Idempotent Semirings; Contributed Session on Algebra and Matrices over Arbitrary Fields; Nonlinear Eigenvalue Problems; Krylov Subspace Methods for Linear Systems
• ILAS Board Meeting

Tuesday, June 4th
• Plenary talks: Raymond Sze; Maryam Fazel
• Coffee Break
• Morning sessions: Linear Algebra Problems in Quantum Computation; Randomized Matrix Algorithms; Linear Algebra, Control, and Optimization; Contributed Session on Computational Science; Matrix Methods for Polynomial Root-Finding; Multilinear Algebra and Tensor Decompositions
• Lunch
• Plenary talk: Ivan Oseledets
• Early afternoon sessions: Matrices and Graph Theory; Linear Complementarity Problems and Beyond; Linear Algebra, Control, and Optimization; Contributed Session on Numerical Linear Algebra; Matrix Methods for Polynomial Root-Finding; Multilinear Algebra and Tensor Decompositions
• Coffee Break
• Late afternoon sessions: Matrices and Graph Theory; Applications of Tropical Mathematics; Contributed Session on Numerical Linear Algebra; Matrix Methods for Polynomial Root-Finding; Multilinear Algebra and Tensor Decompositions

Wednesday, June 5th
• Plenary talks: Thomas Laffey; Gilbert Strang
• Coffee Break
• Morning sessions: Matrices and Orthogonal Polynomials; Symbolic Matrix Algorithms; Contributed Session on Numerical Range and Spectral Problems; Matrices and Graph Theory; Structure and Randomization in Matrix Computations
• Lunch
• Excursion: Newport Mansions

Thursday, June 6th
• Plenary talks: Fuzhen Zhang; Dianne O'Leary
• Coffee Break
• Morning sessions: Matrices and Orthogonal Polynomials; Contributed Session on Numerical Linear Algebra; Symbolic Matrix Algorithms; Contributed Session on Matrix Pencils; Matrices and Graph Theory; Structure and Randomization in Matrix Computations
• Lunch
• Early afternoon sessions: Matrices and Orthogonal Polynomials; Linear and Nonlinear Perron-Frobenius Theory; Linear Algebra Education Issues; Contributed Session on Nonnegative Matrices; Contributed Session on Graph Theory; Structure and Randomization in Matrix Computations
• ILAS Membership Business Meeting
• Late afternoon sessions: Linear and Nonlinear Perron-Frobenius Theory; Matrix Inequalities; Structured Matrix Functions and their Applications; Contributed Session on Matrix Completion Problems; Abstract Interpolation and Linear Algebra; Sign Pattern Matrices
• Banquet

Friday, June 7th
• Plenary talks: Dan Spielman; Jean Bernard Lasserre
• Coffee Break
• Morning sessions: Matrices and Orthogonal Polynomials; Matrix Inequalities; Linear Algebra Education Issues; Contributed Session on Nonnegative Matrices; Abstract Interpolation and Linear Algebra; Sign Pattern Matrices
• Lunch
• Plenary talk: Ravi Kannan
• Early afternoon sessions: Structured Matrix Functions and their Applications; Matrices and Total Positivity; Linear Algebra Education Issues; Matrix Inequalities; Contributed Session on Positive Definite Matrices
• Coffee Break
• Late afternoon sessions: Structured Matrix Functions and their Applications; Matrices and Total Positivity; Matrix Methods in Computational Systems Biology and Medicine; Contributed Session on Matrix Equalities and Inequalities
• Closing Remarks


Sunday

Sunday, 06:00PM - 08:00PM
Location: Rotunda
Welcome Reception and Registration

Monday

Monday, 08:00AM - 08:45AM
Location: Grand Ballroom
Anne Greenbaum (University of Washington) (SIAG/LA speaker)
Session Chair: Stephen Kirkland

Monday, 08:45AM - 09:30AM
Location: Grand Ballroom
Leiba Rodman (College of William and Mary) (LAMA Lecture)
Session Chair: Stephen Kirkland

Monday, 09:30AM - 10:00AM
Location: Grand Foyer
Coffee Break

Monday, 10:00AM - 12:00PM
Location: Grand Ballroom
Linear Algebra Problems in Quantum Computation
10:00 - 10:30 – Jinchuan Hou
10:30 - 11:00 – Lajos Molnar
11:00 - 11:30 – Zejun Huang
11:30 - 12:00 – Gergo Nagy

Monday, 10:00AM - 12:00PM
Location: Bristol A
Advances in Combinatorial Matrix Theory and its Applications
10:00 - 10:30 – Carlos da Fonseca
10:30 - 11:00 – Ferenc Szollosi
11:00 - 11:30 – Geir Dahl
11:30 - 12:00 – Judi McDonald

Monday, 10:00AM - 12:00PM
Location: Bristol B
Generalized Inverses and Applications
10:00 - 10:30 – Minnie Catral
10:30 - 11:00 – Nieves Castro-Gonzalez
11:00 - 11:30 – Esther Dopazo

Monday, 10:00AM - 12:00PM
Location: Ocean
Nonlinear Eigenvalue Problems
10:00 - 10:30 – Francoise Tisseur
10:30 - 11:00 – Karl Meerbergen
11:00 - 11:30 – Elias Jarlebring
11:30 - 12:00 – Zhaojun Bai

Monday, 10:00AM - 12:00PM
Location: Patriots
Linear Least Squares Methods: Algorithms, Analysis, and Applications
10:00 - 10:30 – David Titley-Peloquin
10:30 - 11:00 – Huaian Diao
11:00 - 11:30 – Sanzheng Qiao
11:30 - 12:00 – Keiichi Morikuni

Monday, 12:00PM - 01:20PM
Location: Grand Foyer
Lunch - Attendees on their own - Lunch on sale in Grand Foyer

Monday, 01:20PM - 02:05PM
Location: Grand Ballroom
Alan Edelman (MIT)
Session Chair: Ilse Ipsen


Monday, 02:15PM - 04:15PM
Location: Grand Ballroom
Linear Algebra Problems in Quantum Computation
2:15 - 2:45 – David Kribs
2:45 - 3:15 – Jianxin Chen
3:15 - 3:45 – Nathaniel Johnston
3:45 - 4:15 – Sarah Plosker

Monday, 02:15PM - 04:15PM
Location: Bristol A
Advances in Combinatorial Matrix Theory and its Applications
2:15 - 2:45 – Gi-Sang Cheon
2:45 - 3:15 – Kathleen Kiernan
3:15 - 3:45 – Kevin Vander Meulen
3:45 - 4:15 – Ryan Tifenbach

Monday, 02:15PM - 04:15PM
Location: Bristol B
Generalized Inverses and Applications
2:15 - 2:45 – Huaian Diao
2:45 - 3:15 – Chunyuan Deng
3:15 - 3:45 – Dragana Cvetkovic-Ilic
3:45 - 4:15 – Dijana Mosic

Monday, 02:15PM - 04:15PM
Location: Tiverton
Contributed Session on Algebra and Matrices over Arbitrary Fields Part 1
2:15 - 2:35 – Polona Oblak
2:35 - 2:55 – Gregor Dolinar
3:15 - 3:35 – Leo Livshits
3:35 - 3:55 – Peter Semrl

Monday, 02:15PM - 04:15PM
Location: Ocean
Nonlinear Eigenvalue Problems
2:15 - 2:45 – Ion Zaballa
2:45 - 3:15 – Shreemayee Bora
3:15 - 3:45 – Leo Taslaman
3:45 - 4:15 – Christian Mehl

Monday, 02:15PM - 04:15PM
Location: Patriots
Krylov Subspace Methods for Linear Systems
2:15 - 2:45 – Eric de Sturler
2:45 - 3:15 – Daniel Richmond
3:15 - 3:45 – Andreas Stathopoulos
3:45 - 4:15 – Daniel Szyld

Monday, 04:15PM - 04:45PM
Location: Grand Foyer
Coffee Break

Monday, 04:45PM - 06:45PM
Location: Grand Ballroom
Linear Algebra Problems in Quantum Computation
4:45 - 5:15 – Tin-Yau Tam
5:15 - 5:45 – Diane Christine Pelejo
5:45 - 6:15 – Justyna Zwolak

Monday, 04:45PM - 06:45PM
Location: Bristol A
Advances in Combinatorial Matrix Theory and its Applications
4:45 - 5:15 – In-Jae Kim
5:15 - 5:45 – Charles Johnson

Monday, 04:45PM - 06:45PM
Location: Bristol B
Matrices over Idempotent Semirings
4:45 - 5:15 – Sergei Sergeev
5:15 - 5:45 – Jan Plavka
5:45 - 6:15 – Hans Schneider
6:15 - 6:45 – Martin Gavalec

Monday, 04:45PM - 06:45PM
Location: Tiverton
Contributed Session on Algebra and Matrices over Arbitrary Fields Part 2
4:45 - 5:05 – Mikhail Stroev
5:05 - 5:25 – Bart De Bruyn
5:25 - 5:45 – Rachel Quinlan
5:45 - 6:05 – Yorick Hardy

Monday, 04:45PM - 06:45PM
Location: Ocean
Nonlinear Eigenvalue Problems
4:45 - 5:15 – D. Steven Mackey
5:15 - 5:45 – Fernando De Teran
5:45 - 6:15 – Vanni Noferini
6:15 - 6:45 – Federico Poloni

Monday, 04:45PM - 06:45PM
Location: Patriots
Krylov Subspace Methods for Linear Systems
4:45 - 5:15 – James Baglama
5:15 - 5:45 – Jennifer Pestana
5:45 - 6:15 – Xiaoye Li


Tuesday

Tuesday, 08:00AM - 08:45AM
Location: Grand Ballroom
Raymond Sze (Hong Kong Polytechnic University)
Session Chair: Tin-Yau Tam

Tuesday, 08:45AM - 09:30AM
Location: Grand Ballroom
Maryam Fazel (University of Washington)
Session Chair: Tin-Yau Tam

Tuesday, 09:30AM - 10:00AM
Location: Grand Foyer
Coffee Break

Tuesday, 10:00AM - 12:00PM
Location: Grand Ballroom
Linear Algebra Problems in Quantum Computation
10:00 - 10:30 – Edward Poon
10:30 - 11:00 – Seung-Hyeok Kye
11:00 - 11:30 – Shmuel Friedland
11:30 - 12:00 – Xiaofei Qi

Tuesday, 10:00AM - 12:00PM
Location: Bristol A
Randomized Matrix Algorithms
10:00 - 10:30 – Ilse Ipsen
10:30 - 11:00 – Michael Mahoney
11:00 - 11:30 – Haim Avron
11:30 - 12:00 – Alex Gittens

Tuesday, 10:00AM - 12:00PM
Location: Bristol B
Linear Algebra, Control, and Optimization
10:00 - 10:30 – Biswa Datta
10:30 - 11:00 – Paul Van Dooren
11:00 - 11:30 – Melvin Leok
11:30 - 12:00 – Volker Mehrmann

Tuesday, 10:00AM - 12:00PM
Location: Tiverton
Contributed Session on Computational Science
10:00 - 10:20 – Youngmi Hur
10:20 - 10:40 – Margarida Mitjana
10:40 - 11:00 – Matthias Bolten
11:00 - 11:20 – Hashim Saber
11:20 - 11:40 – Angeles Carmona
11:40 - 12:00 – Andrey Melnikov

Tuesday, 10:00AM - 12:00PM
Location: Ocean
Matrix Methods for Polynomial Root-Finding
10:00 - 10:30 – David Watkins
10:30 - 11:00 – Jared Aurentz
11:00 - 11:30 – Yuli Eidelman
11:30 - 12:00 – Matthias Humet

Tuesday, 10:00AM - 12:00PM
Location: Patriots
Multilinear Algebra and Tensor Decompositions
10:00 - 10:30 – Charles Van Loan
10:30 - 11:00 – Martin Mohlenkamp
11:00 - 11:30 – Manda Winlaw
11:30 - 12:00 – Eugene Tyrtyshnikov

Tuesday, 12:00PM - 01:20PM
Location: Grand Foyer
Lunch - Attendees on their own - Lunch on sale in Grand Foyer

Tuesday, 01:20PM - 02:05PM
Location: Grand Ballroom
Ivan Oseledets (Institute of Numerical Mathematics, RAS)
Session Chair: Misha Kilmer


Tuesday, 02:15PM - 04:15PM
Location: Grand Ballroom
Matrices and Graph Theory
2:15 - 2:45 – Minerva Catral
2:45 - 3:15 – Seth Meyer
3:15 - 3:45 – Adam Berliner
3:45 - 4:15 – Michael Young

Tuesday, 02:15PM - 04:15PM
Location: Bristol A
Linear Complementarity Problems and Beyond
2:15 - 2:45 – Richard Cottle
2:45 - 3:15 – Ilan Adler
3:15 - 3:45 – Jinglai Shen
3:45 - 4:15 – Youngdae Kim

Tuesday, 02:15PM - 04:15PM
Location: Bristol B
Linear Algebra, Control, and Optimization
2:15 - 2:45 – Michael Overton
2:45 - 3:15 – Shankar Bhattacharyya
3:15 - 3:45 – Melina Freitag
3:45 - 4:15 – Bart Vandereycken

Tuesday, 02:15PM - 04:15PM
Location: Tiverton
Contributed Session on Numerical Linear Algebra Part 1
2:15 - 2:35 – Marko Petkovic
2:35 - 2:55 – Jesse Barlow
2:55 - 3:15 – James Lambers
3:15 - 3:35 – Froilan Dopico

Tuesday, 02:15PM - 04:15PM
Location: Ocean
Matrix Methods for Polynomial Root-Finding
2:15 - 2:45 – Victor Pan
2:45 - 3:15 – Peter Strobach
3:15 - 3:45 – Leonardo Robol
3:45 - 4:15 – Raf Vandebril

Tuesday, 02:15PM - 04:15PM
Location: Patriots
Multilinear Algebra and Tensor Decompositions
2:15 - 2:45 – Shmuel Friedland
2:45 - 3:15 – Andre Uschmajew
3:15 - 3:45 – Sergey Dolgov
3:45 - 4:15 – Sergey Dolgov

Tuesday, 04:15PM - 04:45PM
Location: Grand Foyer
Coffee Break

Tuesday, 04:45PM - 06:45PM
Location: Grand Ballroom
Matrices and Graph Theory
4:45 - 5:15 – Woong Kook
5:15 - 5:45 – Bryan Shader
5:45 - 6:15 – Louis Deaett
6:15 - 6:45 – Colin Garnett

Tuesday, 04:45PM - 06:45PM
Location: Bristol A
Applications of Tropical Mathematics
4:45 - 5:15 – Meisam Sharify
5:15 - 5:45 – James Hook
5:45 - 6:15 – Andrea Marchesini
6:15 - 6:45 – Nikolai Krivulin

Tuesday, 04:45PM - 06:45PM
Location: Bristol B
Contributed Session on Numerical Linear Algebra Part 2
4:45 - 5:05 – Clemente Cesarano
5:05 - 5:25 – Surya Prasath
5:25 - 5:45 – Na Li
5:45 - 6:05 – Yusaku Yamamoto
6:05 - 6:25 – Luis Verde-Star

Tuesday, 04:45PM - 06:45PM
Location: Ocean
Matrix Methods for Polynomial Root-Finding
4:45 - 5:15 – Luca Gemignani
5:15 - 5:45 – Gianna Del Corso
5:45 - 6:15 – Olga Holtz

Tuesday, 04:45PM - 06:45PM
Location: Patriots
Multilinear Algebra and Tensor Decompositions
4:45 - 5:15 – Vladimir Kazeev
5:15 - 5:45 – Misha Kilmer
5:45 - 6:15 – Lek-Heng Lim
6:15 - 6:45 – Lieven De Lathauwer


Wednesday

Wednesday, 08:00AM - 08:45AM
Location: Grand Ballroom
Thomas Laffey (University College Dublin) (Hans Schneider Prize winner)
Session Chair: Shmuel Friedland

Wednesday, 08:45AM - 09:30AM
Location: Grand Ballroom
Gilbert Strang (MIT)
Session Chair: Shmuel Friedland

Wednesday, 09:30AM - 10:00AM
Location: Grand Foyer
Coffee Break

Wednesday, 10:00AM - 12:00PM
Location: Bristol A
Matrices and Orthogonal Polynomials
10:00 - 10:30 – Paul Terwilliger
10:30 - 11:00 – Edward Hanson
11:00 - 11:30 – Antonio Duran
11:30 - 12:00 – Holger Dette

Wednesday, 10:00AM - 12:00PM
Location: Bristol B
Symbolic Matrix Algorithms
10:00 - 10:30 – Gilles Villard
10:30 - 11:00 – Arne Storjohann
11:00 - 11:30 – Wayne Eberly
11:30 - 12:00 – David Saunders

Wednesday, 10:00AM - 12:00PM
Location: Tiverton
Contributed Session on Numerical Range and Spectral Problems
10:00 - 10:20 – Christopher Bailey
10:20 - 10:40 – Yuri Nesterenko
10:40 - 11:00 – Michael Karow
11:00 - 11:20 – Xuhua Liu

Wednesday, 10:00AM - 12:00PM
Location: Ocean
Matrices and Graph Theory
10:00 - 10:30 – Pauline van den Driessche
10:30 - 11:00 – Oliver Knill
11:00 - 11:30 – Steve Kirkland
11:30 - 12:00 – Jason Molitierno

Wednesday, 10:00AM - 12:00PM
Location: Patriots
Structure and Randomization in Matrix Computations
10:00 - 10:30 – Victor Pan
10:30 - 11:00 – Yousef Saad
11:00 - 11:30 – Paul Van Dooren
11:30 - 12:00 – Francoise Tisseur

Wednesday, 12:00PM - 01:00PM
Location: Grand Foyer
Lunch - Attendees on their own - Lunch on sale in Grand Foyer

Wednesday, 01:00PM - 10:00PM
Location: Newport, RI
Excursion - Newport Mansions and Dinner in Downtown Newport


Thursday

Thursday, 08:00AM - 08:45AM
Location: Grand Ballroom
Fuzhen Zhang (Nova Southeastern University)
Session Chair: Vadim Olshevsky

Thursday, 08:45AM - 09:30AM
Location: Grand Ballroom
Dianne O'Leary (University of Maryland)
Session Chair: Vadim Olshevsky

Thursday, 09:30AM - 10:00AM
Location: Grand Foyer
Coffee Break

Thursday, 10:00AM - 12:00PM
Location: Grand Ballroom
Matrices and Orthogonal Polynomials
10:00 - 10:30 – Luc Vinet
10:30 - 11:00 – Jeff Geronimo
11:00 - 11:30 – Manuel Manas
11:30 - 12:00 – Hugo Woerdeman

Thursday, 10:00AM - 12:00PM
Location: Bristol A
Contributed Session on Numerical Linear Algebra Part 3
10:00 - 10:20 – Kim Batselier
10:20 - 10:40 – Xuzhou Chen
10:40 - 11:00 – William Morrow
11:00 - 11:20 – Pedro Freitas
11:20 - 11:40 – Abderrahman Bouhamidi
11:40 - 12:00 – Roel Van Beeumen

Thursday, 10:00AM - 12:00PM
Location: Bristol B
Symbolic Matrix Algorithms
10:00 - 10:30 – Victor Pan
10:30 - 11:00 – Brice Boyer
11:00 - 11:30 – Erich Kaltofen
11:30 - 12:00 – George Labahn

Thursday, 10:00AM - 12:00PM
Location: Tiverton
Contributed Session on Matrix Pencils
10:00 - 10:20 – Javier Perez
10:20 - 10:40 – Maria Isabel Bueno Cachadina
10:40 - 11:00 – Michal Wojtylak
11:00 - 11:20 – Andrii Dmytryshyn
11:20 - 11:40 – Vasilije Perovic
11:40 - 12:00 – Susana Furtado

Thursday, 10:00AM - 12:00PM
Location: Ocean
Matrices and Graph Theory
10:00 - 10:30 – Steven Osborne
10:30 - 11:00 – Steve Butler
11:00 - 11:30 – T. S. Michael
11:30 - 12:00 – Shaun Fallat

Thursday, 10:00AM - 12:00PM
Location: Patriots
Structure and Randomization in Matrix Computations
10:00 - 10:30 – Yuli Eidelman
10:30 - 11:00 – Luca Gemignani
11:00 - 11:30 – Marc Baboulin
11:30 - 12:00 – Thomas Mach

Thursday, 12:00PM - 01:30PM
Location: Grand Foyer
Lunch - Attendees on their own - Lunch on sale in Grand Foyer


Thursday, 01:30PM - 03:30PM
Location: Grand Ballroom
Matrices and Orthogonal Polynomials
1:30 - 2:00 – Luis Velazquez
2:00 - 2:30 – Francisco Marcellan
2:30 - 3:00 – Marko Huhtanen
3:00 - 3:30 – Rostyslav Kozhan

Thursday, 01:30PM - 03:30PM
Location: Bristol A
Linear and Nonlinear Perron-Frobenius Theory
1:30 - 2:00 – Roger Nussbaum
2:00 - 2:30 – Helena Smigoc
2:30 - 3:00 – Julio Moro
3:00 - 3:30 – Yongdo Lim

Thursday, 01:30PM - 03:30PM
Location: Bristol B
Linear Algebra Education Issues
1:30 - 2:00 – Gilbert Strang
2:00 - 2:30 – Judi McDonald
2:30 - 3:00 – Sandra Kingan
3:00 - 3:30 – Jeffrey Stuart

Thursday, 01:30PM - 03:30PM
Location: Tiverton
Contributed Session on Nonnegative Matrices Part 1
1:30 - 1:50 – Keith Hooper
1:50 - 2:10 – Akiko Fukuda
2:10 - 2:30 – Anthony Cronin
2:30 - 2:50 – Shani Jose
2:50 - 3:10 – Prashant Batra
3:10 - 3:30 – Pietro Paparella

Thursday, 01:30PM - 03:30PM
Location: Ocean
Contributed Session on Graph Theory
1:30 - 1:50 – Caitlin Phifer
1:50 - 2:10 – Milica Andelic
2:10 - 2:30 – David Jacobs
2:30 - 2:50 – Nathan Warnberg
2:50 - 3:10 – Andres Encinas
3:10 - 3:30 – Joseph Fehribach

Thursday, 01:30PM - 03:30PM
Location: Patriots
Structure and Randomization in Matrix Computations
1:30 - 2:00 – Jianlin Xia
2:00 - 2:30 – Ilse Ipsen
2:30 - 3:00 – Victor Pan
3:00 - 3:30 – Michael Mahoney

Thursday, 03:30PM - 05:30PM
Location: Grand Ballroom
ILAS Membership Business Meeting


Thursday, 05:30PM - 07:30PM
Location: Grand Ballroom
Linear and Nonlinear Perron-Frobenius Theory
5:30 - 6:00 – Marianne Akian
6:00 - 6:30 – Assaf Goldberger
6:30 - 7:00 – Brian Lins
7:00 - 7:30 – Olga Kushel

Thursday, 05:30PM - 07:30PM
Location: Bristol A
Matrix Inequalities
5:30 - 6:00 – Fuzhen Zhang
6:00 - 6:30 – Takashi Sano
6:30 - 7:00 – Charles Johnson
7:00 - 7:30 – Tomohiro Hayashi

Thursday, 05:30PM - 07:30PM
Location: Bristol B
Structured Matrix Functions and their Applications (Dedicated to Leonia Lerer on the occasion of his 70th birthday)
5:30 - 6:00 – Leiba Rodman
6:00 - 6:30 – Gilbert Strang
6:30 - 7:00 – Marinus Kaashoek

Thursday, 05:30PM - 07:30PM
Location: Tiverton
Contributed Session on Matrix Completion Problems
5:30 - 5:50 – James McTigue
5:50 - 6:10 – Gloria Cravo
6:10 - 6:30 – Zheng Qu
6:30 - 6:50 – Ryan Wasson
6:50 - 7:10 – Yue Liu

Thursday, 05:30PM - 07:30PM
Location: Ocean
Abstract Interpolation and Linear Algebra
5:30 - 6:00 – Sanne ter Horst
6:00 - 6:30 – Christopher Beattie
6:30 - 7:00 – Johan Karlsson
7:00 - 7:30 – Dan Volok

Thursday, 05:30PM - 07:30PM
Location: Patriots
Sign Pattern Matrices
5:30 - 6:00 – Zhongshan Li
6:00 - 6:30 – Judi McDonald
6:30 - 7:00 – Leslie Hogben
7:00 - 7:30 – Craig Erickson


Friday

Friday, 08:00AM - 08:45AM
Location: Grand Ballroom
Dan Spielman (Yale University)
Session Chair: Froilan Dopico

Friday, 08:45AM - 09:30AM
Location: Grand Ballroom
Jean Bernard Lasserre (Centre National de la Recherche Scientifique, France)
Session Chair: Froilan Dopico

Friday, 09:30AM - 10:00AM
Location: Grand Foyer
Coffee Break

Friday, 10:00AM - 12:00PM
Location: Grand Ballroom
Matrices and Orthogonal Polynomials
10:00 - 10:30 – Carl Jagels
10:30 - 11:00 – Vladimir Druskin
11:00 - 11:30 – Lothar Reichel
11:30 - 12:00 – Raf Vandebril

Friday, 10:00AM - 12:00PM
Location: Bristol A
Matrix Inequalities
10:00 - 10:30 – Tin-Yau Tam
10:30 - 11:00 – Fuad Kittaneh
11:00 - 11:30 – Rajesh Pereira
11:30 - 12:00 – Ameur Seddik

Friday, 10:00AM - 12:00PM
Location: Bristol B
Linear Algebra Education Issues
10:00 - 10:30 – David Strong
10:30 - 11:00 – Rachel Quinlan
11:00 - 11:30 – Megan Wawro
11:30 - 12:00 – Sepideh Stewart

Friday, 10:00AM - 12:00PM
Location: Tiverton
Contributed Session on Nonnegative Matrices Part 2
10:00 - 10:20 – James Weaver
10:20 - 10:40 – Hiroshi Kurata
10:40 - 11:00 – Maguy Trefois
11:00 - 11:20 – Ravindra Bapat
11:20 - 11:40 – Naomi Shaked-Monderer

Friday, 10:00AM - 12:00PM
Location: Ocean
Abstract Interpolation and Linear Algebra
10:00 - 10:30 – Izchak Lewkowicz
10:30 - 11:00 – Quanlei Fang
11:00 - 11:30 – Daniel Alpay
11:30 - 12:00 – Vladimir Bolotnikov

Friday, 10:00AM - 12:00PM
Location: Patriots
Sign Pattern Matrices
10:00 - 10:30 – Marina Arav
10:30 - 11:00 – Wei Gao
11:00 - 11:30 – Timothy Melvin

Friday, 12:00PM - 01:20PM
Location: Grand Foyer
Lunch - Attendees on their own - Lunch on sale in Grand Foyer

Friday, 01:20PM - 02:05PM
Location: Grand Ballroom
Ravi Kannan (Microsoft Research)
Session Chair: Jianlin Xia


Friday, 02:15PM - 04:15PM
Location: Grand Ballroom

Structured Matrix Functions and their Applications (Dedicated to Leonia Lerer on the occasion of his 70th birthday)
2:15 - 2:45 – Joseph Ball
2:45 - 3:15 – Alexander Sakhnovich
3:15 - 3:45 – Hermann Rabe
3:45 - 4:15 – Harm Bart

Friday, 02:15PM - 04:15PM
Location: Bristol A

Matrices and Total Positivity
2:15 - 2:45 – Charles Johnson
2:45 - 3:15 – Gianna Del Corso
3:15 - 3:45 – Olga Kushel
3:45 - 4:15 – Rafael Canto

Friday, 02:15PM - 04:15PM
Location: Bristol B

Linear Algebra Education Issues
2:15 - 2:45 – Robert Beezer
2:45 - 3:15 – Tom Edgar
3:15 - 3:45 – Sang-Gu Lee
3:45 - 4:15 – Jason Grout

Friday, 02:15PM - 04:15PM
Location: Ocean

Matrix Inequalities
2:15 - 2:45 – Natalia Bebiano
2:45 - 3:15 – Takeaki Yamazaki
3:15 - 3:45 – Minghua Lin

Friday, 02:15PM - 04:15PM
Location: Patriots

Contributed Session on Positive Definite Matrices
2:15 - 2:35 – Bala Rajaratnam
2:35 - 2:55 – Apoorva Khare
2:55 - 3:15 – Dominique Guillot

Friday, 04:15PM - 04:45PM
Location: Grand Foyer

Coffee Break

Friday, 04:45PM - 06:45PM
Location: Grand Ballroom

Structured Matrix Functions and their Applications (Dedicated to Leonia Lerer on the occasion of his 70th birthday)
4:45 - 5:15 – Ilya Spitkovsky
5:15 - 5:45 – David Kimsey
5:45 - 6:15 – Hugo Woerdeman

Friday, 04:45PM - 06:45PM
Location: Bristol A

Matrices and Total Positivity
4:45 - 5:15 – Jorge Delgado
5:15 - 5:45 – Alvaro Barreras
5:45 - 6:15 – Jose-Javier Martínez
6:15 - 6:45 – Stephane Launois

Friday, 04:45PM - 06:45PM
Location: Bristol B

Matrix Methods in Computational Systems Biology and Medicine
4:45 - 5:15 – Lee Altenberg
5:15 - 5:45 – Natasa Durdevac
5:45 - 6:15 – Marcus Weber
6:15 - 6:45 – Amir Niknejad

Friday, 04:45PM - 06:45PM
Location: Tiverton

Contributed Session on Matrix Equalities and Inequalities
4:45 - 5:05 – Bas Lemmens
5:05 - 5:25 – Dawid Janse van Rensburg
5:25 - 5:45 – Hosoo Lee
5:45 - 6:05 – Jadranka Micic Hot

Friday, 04:45PM - 06:45PM
Location: Ocean

Linear Algebra Problems in Quantum Computation
4:45 - 5:15 – Man-Duen Choi
5:15 - 5:45 – Panayiotis Psarrakos
5:45 - 6:15 – Raymond Nung-Sing Sze

Friday, 04:45PM - 06:45PM
Location: Patriots

Applications of Tropical Mathematics
4:45 - 5:15 – Marianne Johnson
5:15 - 5:45 – Pascal Benchimol
5:45 - 6:15 – Marie MacCaig
6:15 - 6:45 – Viorel Nitica


Monday, 08:00AM - 08:45AM

Anne Greenbaum
Location: Grand Ballroom
K-Spectral Sets and Applications

Let A be an n by n matrix or a bounded linear operator on a Hilbert space. If f : ℂ → ℂ is a function that is holomorphic in a region Ω of the complex plane containing a neighborhood of the spectrum of A, then the matrix or operator f(A) can be defined through the Cauchy integral formula. For many years, there has been interest in relating the 2-norm of the operator f(A) (that is, ‖f(A)‖ ≡ sup_{‖v‖_2=1} ‖f(A)v‖_2) to the L∞-norm of f on such a set Ω (that is, ‖f‖_Ω ≡ sup_{z∈Ω} |f(z)|). If all rational functions f with poles outside Ω satisfy ‖f(A)‖ ≤ K‖f‖_Ω, then Ω is said to be a K-spectral set for A.

The first identification of a 1-spectral set (or, just, a spectral set) was by von Neumann in 1951: If A is a contraction (that is, ‖A‖ ≤ 1), then ‖f(A)‖ ≤ ‖f‖_D, where D denotes the unit disk. More recently (2004), M. Crouzeix has shown that the field of values or numerical range (W(A) ≡ {〈Aq, q〉 : ‖q‖_2 = 1}) is a K-spectral set for A with K = 11.08, and he conjectures that K can be taken to be 2. Another K-spectral set is the ε-pseudospectrum of A: Λ_ε(A) ≡ {z ∈ ℂ : ‖(zI − A)^{-1}‖ ≥ ε^{-1}}. If L denotes the length of the boundary of Λ_ε(A), then ‖f(A)‖ ≤ (L/(2πε))‖f‖_{Λ_ε(A)}. Many other K-spectral sets can be derived for a given operator A by noting that if Ω is a K-spectral set for B and if A = f(B), then f(Ω) is a K-spectral set for A.

In this talk, I will discuss these ideas and others, concentrating on the conjecture of Crouzeix that the field of values of A is a 2-spectral set for A. A (possibly) stronger conjecture is that if g is a biholomorphic mapping from W(A) onto D, then g(A) is similar to a contraction via a similarity transformation with condition number at most 2; that is, g(A) = XCX^{-1}, where ‖C‖ ≤ 1 and κ(X) ≡ ‖X‖ · ‖X^{-1}‖ ≤ 2. An advantage of this statement is that it can be checked numerically (and perhaps even proved through careful arguments about the accuracy of the computation) for specific matrices A. I will discuss implications of these estimates and conjectures for various problems of applied and computational mathematics, in particular, what they can tell us about the convergence rate of the GMRES algorithm for solving linear systems. In some cases, good bounds can be obtained even when the origin lies inside the field of values by using the fact that if A = f(B), then f(W(B)) is a K-spectral set for A.
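Von Neumann's inequality is easy to probe numerically. The sketch below is a pure-Python illustration (the 2×2 matrix A and the polynomial f are arbitrary choices, not taken from the talk): it computes ‖f(A)‖ for an upper triangular contraction and compares it against a sampled supremum of |f| on the unit circle.

```python
import cmath
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def conj_t(A):
    return [[complex(A[j][i]).conjugate() for j in range(2)] for i in range(2)]

def spectral_norm(A):
    # ||A|| = sqrt(lambda_max(A* A)); for a 2x2 Hermitian matrix the eigenvalues
    # come from the quadratic formula on the characteristic polynomial
    B = matmul(conj_t(A), A)
    tr = (B[0][0] + B[1][1]).real
    det = (B[0][0] * B[1][1] - B[0][1] * B[1][0]).real
    lam_max = (tr + math.sqrt(max(tr * tr - 4 * det, 0.0))) / 2
    return math.sqrt(max(lam_max, 0.0))

def poly_of_matrix(coeffs, A):
    # Horner evaluation of f(A) = c0 I + c1 A + c2 A^2 + ...
    R = [[0, 0], [0, 0]]
    for c in reversed(coeffs):
        R = matmul(R, A)
        R = [[R[i][j] + (c if i == j else 0) for j in range(2)] for i in range(2)]
    return R

A = [[0.3, 0.6], [0.0, -0.5 + 0.2j]]       # an upper triangular contraction
assert spectral_norm(A) <= 1.0
f = [0.5, -1.0, 2.0]                        # f(z) = 0.5 - z + 2 z^2
lhs = spectral_norm(poly_of_matrix(f, A))
# by the maximum principle, sup of |f| over the closed disk is attained on the circle
rhs = max(abs(sum(c * z**k for k, c in enumerate(f)))
          for z in (cmath.exp(2j * math.pi * t / 720) for t in range(720)))
assert lhs <= rhs                           # von Neumann: ||f(A)|| <= ||f||_D
```

The inequality is generically strict, and sampling the circle at 720 points is adequate here only because f is a low-degree polynomial.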

Monday, 08:45AM - 09:30AM

Leiba Rodman
Location: Grand Ballroom
Eigenvalue perturbation analysis of structured matrices

We study the eigenvalue perturbation theory of structured matrices under structured rank-one perturbations, which may or may not be small in norm. The structured matrices in question are defined by various symmetry properties, such as selfadjoint or symplectic with respect to a sesquilinear, symmetric bilinear, or skew-symmetric form, over the real or complex numbers. Generic Jordan structures of the perturbed matrix are identified in many cases. The talk is based on joint work with C. Mehl, V. Mehrmann, and A.C.M. Ran.


Monday, 10:00AM - 12:00PM

Linear Algebra Problems in Quantum Computation (Location: Grand Ballroom)
10:00 - 10:30 – Jinchuan Hou: Quantum measurements and maps preserving strict convex combinations and pure states
10:30 - 11:00 – Lajos Molnar: Some non-linear preservers on quantum structures
11:00 - 11:30 – Zejun Huang: Linear preserver problems arising in quantum information science
11:30 - 12:00 – Gergo Nagy: Transformations on density matrices preserving the Holevo bound

Advances in Combinatorial Matrix Theory and its Applications (Location: Bristol A)
10:00 - 10:30 – Carlos da Fonseca: On the P-sets of acyclic matrices
10:30 - 11:00 – Ferenc Szollosi: Large set of equiangular lines in Euclidean spaces
11:00 - 11:30 – Geir Dahl: Majorization transforms and Ryser’s algorithm
11:30 - 12:00 – Judi McDonald: Good Bases for Nonnegative Realizations

Generalized Inverses and Applications (Location: Bristol B)
10:00 - 10:30 – Minnie Catral: Matrix groups constructed from {R, s+1, k}-potent matrices
10:30 - 11:00 – Nieves Castro-Gonzalez: Generalized inverses and extensions of the Sherman-Morrison-Woodbury formulae
11:00 - 11:30 – Esther Dopazo: Formulas for the Drazin inverse of block anti-triangular matrices

Nonlinear Eigenvalue Problems (Location: Ocean)
10:00 - 10:30 – Francoise Tisseur: A Review of Nonlinear Eigenvalue Problems
10:30 - 11:00 – Karl Meerbergen: An overview of Krylov methods for nonlinear eigenvalue problems
11:00 - 11:30 – Elias Jarlebring: Block iterations for eigenvector nonlinearities
11:30 - 12:00 – Zhaojun Bai: A Pade Approximate Linearization Technique for Solving the Quadratic Eigenvalue Problem with Low-Rank Damping

Linear Least Squares Methods: Algorithms, Analysis, and Applications (Location: Patriots)
10:00 - 10:30 – David Titley-Peloquin: Stochastic conditioning of systems of equations
10:30 - 11:00 – Huaian Diao: Structured Condition Numbers for a Linear Functional of the Tikhonov Regularized Solution
11:00 - 11:30 – Sanzheng Qiao: Lattice Basis Reduction: Preprocessing the Integer Least Squares Problem
11:30 - 12:00 – Keiichi Morikuni: Inner-iteration preconditioning for the minimum-norm solution of rank-deficient linear systems

Monday, 01:20PM - 02:05PM

Alan Edelman
Location: Grand Ballroom
The Interplay of Random Matrix Theory, Numerical Linear Algebra, and Ghosts and Shadows

Back in the day, numerical linear algebra was about the construction of SVDs, GSVDs, tridiagonals, bidiagonals, arrow matrices, Householder reflectors, Givens rotations, and so much more. In this talk, we will show the link to classical finite random matrix theory. In particular, we show how the numerical algorithms of yesteryear are playing a role today that we do not completely understand. What is happening is that our favorite algorithms seem to be working not only on reals, complexes, and quaternions, but on spaces of any arbitrary dimension, at least in a very simple formal sense. A continuing theme is that the algorithms from numerical linear algebra seem as though they would be incredibly useful even if there were never any computers to run them on.


Monday, 02:15PM - 04:15PM

Linear Algebra Problems in Quantum Computation (Location: Grand Ballroom)
2:15 - 2:45 – David Kribs: Private quantum subsystems and connections with quantum error correction
2:45 - 3:15 – Jianxin Chen: Rank reduction for the local consistency problem
3:15 - 3:45 – Nathaniel Johnston: On the Minimum Size of Unextendible Product Bases
3:45 - 4:15 – Sarah Plosker: On trumping

Advances in Combinatorial Matrix Theory and its Applications (Location: Bristol A)
2:15 - 2:45 – Gi-Sang Cheon: Several types of Bessel numbers generalized and some combinatorial setting
2:45 - 3:15 – Kathleen Kiernan: Patterns of Alternating Sign Matrices
3:15 - 3:45 – Kevin Vander Meulen: Companion Matrices and Spectrally Arbitrary Patterns
3:45 - 4:15 – Ryan Tifenbach: A combinatorial approach to nearly uncoupled Markov chains

Generalized Inverses and Applications (Location: Bristol B)
2:15 - 2:45 – Huaian Diao: On the Level-2 Condition Number for Moore-Penrose Inverse in Hilbert Space
2:45 - 3:15 – Chunyuan Deng: On invertibility of combinations of k-potent operators
3:15 - 3:45 – Dragana Cvetkovic-Ilic: Reverse order law for the generalized inverses
3:45 - 4:15 – Dijana Mosic: Representations for the generalized Drazin inverse of block matrices in Banach algebras

Contributed Session on Algebra and Matrices over Arbitrary Fields Part 1 (Location: Tiverton)
2:15 - 2:35 – Polona Oblak: On extremal matrix centralizers
2:35 - 2:55 – Gregor Dolinar: Commutativity preserving maps via maximal centralizers
3:15 - 3:35 – Leo Livshits: Paratransitive algebras of linear transformations and a spatial implementation of “Wedderburn’s Principal Theorem”
3:35 - 3:55 – Peter Semrl: Adjacency preserving maps

Nonlinear Eigenvalue Problems (Location: Ocean)
2:15 - 2:45 – Ion Zaballa: The Inverse Symmetric Quadratic Eigenvalue Problem
2:45 - 3:15 – Shreemayee Bora: Distance problems for Hermitian matrix polynomials
3:15 - 3:45 – Leo Taslaman: Exploiting low rank of damping matrices using the Ehrlich-Aberth method
3:45 - 4:15 – Christian Mehl: Skew-symmetry - a powerful structure for matrix polynomials

Krylov Subspace Methods for Linear Systems (Location: Patriots)
2:15 - 2:45 – Eric de Sturler: The convergence behavior of BiCG
2:45 - 3:15 – Daniel Richmond: Implicitly Restarting the LSQR Algorithm
3:15 - 3:45 – Andreas Stathopoulos: Using ILU(0) to estimate the diagonal of the inverse of a matrix
3:45 - 4:15 – Daniel Szyld: MPGMRES: a generalized minimum residual method with multiple preconditioners


Monday, 04:45PM - 06:45PM

Linear Algebra Problems in Quantum Computation (Location: Grand Ballroom)
4:45 - 5:15 – Tin-Yau Tam: Gradient Flows for the Minimum Distance to the Sum of Adjoint Orbits
5:15 - 5:45 – Diane Christine Pelejo: Decomposing a Unitary Matrix into a Product of Weighted Quantum Gates
5:45 - 6:15 – Justyna Zwolak: Higher dimensional witnesses for entanglement in 2N-qubit chains

Advances in Combinatorial Matrix Theory and its Applications (Location: Bristol A)
4:45 - 5:15 – In-Jae Kim: Creating a user-satisfaction index
5:15 - 5:45 – Charles Johnson: Eigenvalues, Multiplicities and Linear Trees

Matrices over Idempotent Semirings (Location: Bristol B)
4:45 - 5:15 – Sergei Sergeev: Gaussian elimination and other universal algorithms
5:15 - 5:45 – Jan Plavka: The robustness of interval matrices in max-min algebra
5:45 - 6:15 – Hans Schneider: Bounds of Wielandt and Schwartz for max-algebraic matrix powers
6:15 - 6:45 – Martin Gavalec: Tolerance and strong tolerance interval eigenvectors in max-min algebra

Contributed Session on Algebra and Matrices over Arbitrary Fields Part 2 (Location: Tiverton)
4:45 - 5:05 – Mikhail Stroev: Permanents, determinants, and generalized complementary basic matrices
5:05 - 5:25 – Bart De Bruyn: On trivectors and hyperplanes of symplectic dual polar spaces
5:25 - 5:45 – Rachel Quinlan: Partial matrices of constant rank over small fields
5:45 - 6:05 – Yorick Hardy: Uniform Kronecker Quotients

Nonlinear Eigenvalue Problems (Location: Ocean)
4:45 - 5:15 – D. Steven Mackey: A Filtration Perspective on Minimal Indices
5:15 - 5:45 – Fernando De Teran: Spectral equivalence of Matrix Polynomials, the Index Sum Theorem and consequences
5:45 - 6:15 – Vanni Noferini: Vector spaces of linearizations for matrix polynomials: a bivariate polynomial approach
6:15 - 6:45 – Federico Poloni: Duality of matrix pencils, singular pencils and linearizations

Krylov Subspace Methods for Linear Systems (Location: Patriots)
4:45 - 5:15 – James Baglama: Restarted Block Lanczos Bidiagonalization Algorithm for Finding Singular Triplets
5:15 - 5:45 – Jennifer Pestana: GMRES convergence bounds that depend on the right-hand side vector
5:45 - 6:15 – Xiaoye Li: Robust and Scalable Schur Complement Method using Hierarchical Parallelism


Tuesday, 08:00AM - 08:45AM

Raymond Sze
Location: Grand Ballroom
Quantum Operation and Quantum Error Correction

Quantum operation is one of the most fundamental and important concepts, widely used in all aspects of quantum information theory. In the context of quantum computation, a quantum operation is also called a quantum channel. A quantum channel can be viewed as a communication channel in a quantum system which can transmit quantum information such as qubits. Mathematically, a quantum channel is a trace-preserving completely positive map between quantum state spaces with the operator-sum representation ρ ↦ ∑_{j=1}^r E_j ρ E_j^* with ∑_{j=1}^r E_j^* E_j = I.

One of the fundamental problems quantum information scientists are concerned with is whether one can design and construct a quantum device that transforms certain quantum states into other quantum states. This task is physically possible if a specified quantum operation (transformation) of certain prescribed sets of input and output states can be found. The problem then becomes to determine an existence condition for a trace-preserving completely positive map sending ρ_j to σ_j for all j, for certain given sets of quantum states {ρ_1, . . . , ρ_k} and {σ_1, . . . , σ_k}. This is called the problem of state transformation. In this talk, recent results on this problem will be presented.
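The operator-sum representation is easy to verify on a toy example. The sketch below (pure Python; the single-qubit bit-flip channel with p = 0.25 and the density matrix ρ are illustrative choices, not from the talk) checks the completeness condition ∑_j E_j^* E_j = I and confirms that the channel preserves the trace of ρ.

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def dagger(A):
    n = len(A)
    return [[complex(A[j][i]).conjugate() for j in range(n)] for i in range(n)]

def add(A, B):
    n = len(A)
    return [[A[i][j] + B[i][j] for j in range(n)] for i in range(n)]

# Kraus operators of a single-qubit bit-flip channel with flip probability p
p = 0.25
E = [[[math.sqrt(1 - p), 0], [0, math.sqrt(1 - p)]],   # sqrt(1-p) * I
     [[0, math.sqrt(p)], [math.sqrt(p), 0]]]           # sqrt(p)   * X

# completeness condition sum_j E_j^* E_j = I (this is what makes the map trace preserving)
S = [[0, 0], [0, 0]]
for Ej in E:
    S = add(S, matmul(dagger(Ej), Ej))
assert all(abs(S[i][j] - (1 if i == j else 0)) < 1e-12 for i in range(2) for j in range(2))

# apply the channel rho -> sum_j E_j rho E_j^* and check that the trace is preserved
rho = [[0.7, 0.1 + 0.2j], [0.1 - 0.2j, 0.3]]           # a density matrix with trace 1
out = [[0, 0], [0, 0]]
for Ej in E:
    out = add(out, matmul(matmul(Ej, rho), dagger(Ej)))
assert abs((out[0][0] + out[1][1]) - 1) < 1e-12
```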

When a quantum system interacts with the outside world, it is vulnerable to disturbance from the external environment, which can lead to decoherence in the system. Decoherence can be regarded as a quantum operation in which the environment causes undesirable noise in the quantum system. Quantum error correction is one of the strategies to fight against decoherence. In this talk, we review different error correction schemes, including the decoherence-free subspace model, the noiseless subsystem model, and the operator quantum error correction model, in an operator theoretical approach.

Tuesday, 08:45AM - 09:30AM

Maryam Fazel
Location: Grand Ballroom
Recovery of Structured Models with Limited Information

Finding models with a low-dimensional structure given a limited number of observations is a central problem in signal processing (compressed sensing), machine learning (recommender systems), and system identification. Examples of such structured models include sparse vectors, low-rank matrices, and sums of sparse and low-rank matrices. We begin by reviewing recent results for these structures, characterizing the number of observations needed for successful model recovery.

We then examine a new problem: in many signal processing and machine learning applications, the desired model has multiple structures simultaneously. Applications include sparse phase retrieval, and learning models with several structural priors in machine learning tasks. Often, penalties that promote the individual structures are known and require a minimal number of generic measurements (e.g., the ℓ1 norm for sparsity, the nuclear norm for matrix rank), so it is reasonable to minimize a combination of such norms. We show that, surprisingly, if we use multiobjective optimization with the individual norms, we can do no better (order-wise) than an algorithm that exploits only one of the structures. This result holds in a general setting and suggests that to fully exploit the multiple structures, we need an entirely new convex relaxation. We also discuss the related problem of ‘denoising’ for simultaneously structured signals corrupted by additive noise.


Tuesday, 10:00AM - 12:00PM

Linear Algebra Problems in Quantum Computation (Location: Grand Ballroom)
10:00 - 10:30 – Edward Poon: On maximally entangled states
10:30 - 11:00 – Seung-Hyeok Kye: Facial structures for bipartite separable states and applications to the separability criterion
11:00 - 11:30 – Shmuel Friedland: On separable and entangled states in QIS
11:30 - 12:00 – Xiaofei Qi: Constructing optimal entanglement witnesses by permutations

Randomized Matrix Algorithms (Location: Bristol A)
10:00 - 10:30 – Ilse Ipsen: Accuracy of Leverage Score Estimation
10:30 - 11:00 – Michael Mahoney: Implementing Randomized Matrix Algorithms in Parallel and Distributed Environments
11:00 - 11:30 – Haim Avron: A Randomized Asynchronous Linear Solver with Provable Convergence Rate
11:30 - 12:00 – Alex Gittens: Randomized low-rank approximations, in theory and practice

Linear Algebra, Control, and Optimization (Location: Bristol B)
10:00 - 10:30 – Biswa Datta: A Hybrid Method for Time-Delayed Robust and Minimum Norm Quadratic Partial Eigenvalue Assignment
10:30 - 11:00 – Paul Van Dooren: On finding the nearest stable system
11:00 - 11:30 – Melvin Leok: Variational Integrators for Matrix Lie Groups with Applications to Geometric Control
11:30 - 12:00 – Volker Mehrmann: Optimal control of descriptor systems

Contributed Session on Computational Science (Location: Tiverton)
10:00 - 10:20 – Youngmi Hur: A New Algebraic Approach to the Construction of Multidimensional Wavelet Filter Banks
10:20 - 10:40 – Margarida Mitjana: Green matrices associated with Generalized Linear Polyominoes
10:40 - 11:00 – Matthias Bolten: Multigrid methods for high-dimensional problems with tensor structure
11:00 - 11:20 – Hashim Saber: A Model Reduction Algorithm for Simulating Sedimentation Velocity Analysis
11:20 - 11:40 – Angeles Carmona: Discrete Serrin’s Problem
11:40 - 12:00 – Andrey Melnikov: A new method for solving completely integrable PDEs

Matrix Methods for Polynomial Root-Finding (Location: Ocean)
10:00 - 10:30 – David Watkins: Fast computation of eigenvalues of companion, comrade, and related matrices
10:30 - 11:00 – Jared Aurentz: A Factorization of the Inverse of the Shifted Companion Matrix
11:00 - 11:30 – Yuli Eidelman: Multiplication of matrices via quasiseparable generators and eigenvalue iterations for block triangular matrices
11:30 - 12:00 – Matthias Humet: A generalized companion method to solve systems of polynomials

Multilinear Algebra and Tensor Decompositions (Location: Patriots)
10:00 - 10:30 – Charles Van Loan: Properties of the Higher-Order Generalized Singular Value Decomposition
10:30 - 11:00 – Martin Mohlenkamp: Wrestling with Alternating Least Squares
11:00 - 11:30 – Manda Winlaw: A Preconditioned Nonlinear Conjugate Gradient Algorithm for Canonical Tensor Decomposition
11:30 - 12:00 – Eugene Tyrtyshnikov: Tensor decompositions and optimization problems

Tuesday, 01:20PM - 02:05PM

Ivan Oseledets
Location: Grand Ballroom
Numerical tensor methods and their applications

TBA


Tuesday, 02:15PM - 04:15PM

Matrices and Graph Theory (Location: Grand Ballroom)
2:15 - 2:45 – Minerva Catral: Zero Forcing Number and Maximum Nullity of Subdivided Graphs
2:45 - 3:15 – Seth Meyer: Minimum rank for circulant graphs
3:15 - 3:45 – Adam Berliner: Minimum rank, maximum nullity, and zero forcing number of simple digraphs
3:45 - 4:15 – Michael Young: Generalized Propagation Time of Zero Forcing

Linear Complementarity Problems and Beyond (Location: Bristol A)
2:15 - 2:45 – Richard Cottle: Three Solution Properties of a Class of LCPs: Sparsity, Elusiveness, and Strongly Polynomial Computability
2:45 - 3:15 – Ilan Adler: The central role of bimatrix games and LCPs with P-matrices in the computational analysis of Lemke-resolvable LCPs
3:15 - 3:45 – Jinglai Shen: Uniform Lipschitz property of spline estimation of shape constrained functions
3:45 - 4:15 – Youngdae Kim: A Pivotal Method for Affine Variational Inequalities (PATHAVI)

Linear Algebra, Control, and Optimization (Location: Bristol B)
2:15 - 2:45 – Michael Overton: Stability Optimization for Polynomials and Matrices
2:45 - 3:15 – Shankar Bhattacharyya: Linear Systems: A Measurement Based Approach
3:15 - 3:45 – Melina Freitag: Calculating the H∞-norm using the implicit determinant method
3:45 - 4:15 – Bart Vandereycken: Subspace methods for computing the pseudospectral abscissa and the stability radius

Contributed Session on Numerical Linear Algebra Part 1 (Location: Tiverton)
2:15 - 2:35 – Marko Petkovic: Gauss-Jordan elimination method for computing outer inverses
2:35 - 2:55 – Jesse Barlow: Block Gram-Schmidt Downdating
2:55 - 3:15 – James Lambers: Explicit high-order time stepping based on componentwise application of asymptotic block Lanczos iteration
3:15 - 3:35 – Froilan Dopico: Structured eigenvalue condition numbers for parameterized quasiseparable matrices

Matrix Methods for Polynomial Root-Finding (Location: Ocean)
2:15 - 2:45 – Victor Pan: Real and Complex Polynomial Root-finding by Means of Eigen-solving
2:45 - 3:15 – Peter Strobach: The Fitting of Convolutional Models for Complex Polynomial Root Refinement
3:15 - 3:45 – Leonardo Robol: Solving secular and polynomial equations: a multiprecision algorithm
3:45 - 4:15 – Raf Vandebril: Companion pencil vs. Companion matrix: Can we ameliorate the accuracy when computing roots?

Multilinear Algebra and Tensor Decompositions (Location: Patriots)
2:15 - 2:45 – Shmuel Friedland: The number of singular vector tuples and the uniqueness of best (r_1, . . . , r_d) approximation of tensors
2:45 - 3:15 – Andre Uschmajew: On convergence of ALS and MBI for approximation in the TT format
3:15 - 3:45 – Sergey Dolgov: Alternating minimal energy methods for linear systems in higher dimensions. Part I: SPD systems
3:45 - 4:15 – Sergey Dolgov: Alternating minimal energy methods for linear systems in higher dimensions. Part II: Faster algorithm and application to nonsymmetric systems


Tuesday, 04:45PM - 06:45PM

Matrices and Graph Theory (Location: Grand Ballroom)
4:45 - 5:15 – Woong Kook: Logarithmic Tree Numbers for Acyclic Complexes
5:15 - 5:45 – Bryan Shader: The λ-τ problem for symmetric matrices with a given graph
5:45 - 6:15 – Louis Deaett: The rank of a positive semidefinite matrix and cycles in its graph
6:15 - 6:45 – Colin Garnett: Refined Inertias of Tree Sign Patterns

Applications of Tropical Mathematics (Location: Bristol A)
4:45 - 5:15 – Meisam Sharify: Computation of the eigenvalues of a matrix polynomial by means of tropical algebra
5:15 - 5:45 – James Hook: Tropical eigenvalues
5:45 - 6:15 – Andrea Marchesini: Tropical bounds for eigenvalues of matrices
6:15 - 6:45 – Nikolai Krivulin: Extremal properties of tropical eigenvalues and solutions of tropical optimization problems

Contributed Session on Numerical Linear Algebra Part 2 (Location: Bristol B)
4:45 - 5:05 – Clemente Cesarano: Generalized Chebyshev polynomials
5:05 - 5:25 – Surya Prasath: On Moore-Penrose Inverse based Image Restoration
5:25 - 5:45 – Na Li: CP decomposition of partial symmetric tensors
5:45 - 6:05 – Yusaku Yamamoto: An Algorithm for the Nonlinear Eigenvalue Problem based on the Contour Integral
6:05 - 6:25 – Luis Verde-Star: Construction of all discrete orthogonal and q-orthogonal classical polynomial sequences using infinite matrices and Pincherle derivatives

Matrix Methods for Polynomial Root-Finding (Location: Ocean)
4:45 - 5:15 – Luca Gemignani: The Ehrlich-Aberth method for structured eigenvalue refinement
5:15 - 5:45 – Gianna Del Corso: A condensed representation of generalized companion matrices
5:45 - 6:15 – Olga Holtz: Generalized Hurwitz matrices and forbidden sectors of the complex plane

Multilinear Algebra and Tensor Decompositions (Location: Patriots)
4:45 - 5:15 – Vladimir Kazeev: From tensors to wavelets and back
5:15 - 5:45 – Misha Kilmer: Tensor-based Regularization for Multienergy X-ray CT Reconstruction
5:45 - 6:15 – Lek-Heng Lim: Multiplying higher-order tensors
6:15 - 6:45 – Lieven De Lathauwer: Between linear and nonlinear: numerical computation of tensor decompositions


Wednesday, 08:00AM - 08:45AM

Thomas Laffey
Location: Grand Ballroom
Characterizing the spectra of nonnegative matrices

Given a list of complex numbers σ := (λ_1, . . . , λ_n), the nonnegative inverse eigenvalue problem (NIEP) asks for necessary and sufficient conditions for σ to be the list of eigenvalues of an entry-wise nonnegative matrix. If σ has this property, we say that it is realizable.

We will discuss the current status of research on this problem. We will also discuss the related problem, first studied by Boyle and Handelman in their celebrated work, of the realizability of σ with an arbitrary number of zeros added.

With a particular emphasis on the properties of polynomials and power series constructed from σ, we will present some new results, obtained jointly with R. Loewy and H. Smigoc, on constructive approaches to the NIEP, and some related observations of F. Holland.

For σ above, write f(x) = ∏_{k=1}^n (x − λ_k). For realizable σ, V. Monov has proved interesting results on the realizability of the list of roots of the derivative f′(x) and raised some intriguing questions. We will report some progress on these, obtained jointly with my student Anthony Cronin.

Wednesday, 08:45AM - 09:30AM

Gilbert Strang
Location: Grand Ballroom
Functions of Difference Matrices

The second difference matrix K is tridiagonal with entries 1, −2, 1 down the diagonals. We can approximate the heat equation u_t = u_xx on an interval by u_t = Ku/h². The solution involves E = exp(Kt), and we take a closer look at this matrix and related matrices for the wave equation: C = cos(√−K t) and S = sinc(√−K t).

Sinc was a surprise.

The eigenvectors of K, E, C, S are sine vectors. So E_ij becomes a combination of sines times sines, weighted by the eigenvalues of E. Rewriting in terms of sin((i−j)kh) and sin((i+j)kh), this matrix E has a shift-invariant part (a Toeplitz matrix): shift the initial function and the solution shifts. There is also a Hankel part that shifts the other way! We hope to explain this.

A second step is to find a close approximation to these sums by integrals. With smooth periodic functions the accuracy is superexponential. The integrals produce Bessel functions in the finite difference solutions to the heat equation and the wave equation. There are applications to graphs and networks.
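The statement that the eigenvectors of K are sine vectors can be verified directly. The sketch below (pure Python; the grid size n = 8 is an arbitrary choice) builds the tridiagonal 1, −2, 1 matrix and checks the classical eigenpairs v_k(i) = sin(ikπ/(n+1)) with λ_k = −2 + 2cos(kπ/(n+1)).

```python
import math

n = 8                       # number of interior grid points (arbitrary)
# second difference matrix K: tridiagonal with 1, -2, 1 down the diagonals
K = [[-2 if i == j else 1 if abs(i - j) == 1 else 0 for j in range(n)] for i in range(n)]

# classical eigenpairs of K: v_k(i) = sin(i k pi/(n+1)), lambda_k = -2 + 2 cos(k pi/(n+1))
for k in range(1, n + 1):
    v = [math.sin((i + 1) * k * math.pi / (n + 1)) for i in range(n)]
    lam = -2 + 2 * math.cos(k * math.pi / (n + 1))
    Kv = [sum(K[i][j] * v[j] for j in range(n)) for i in range(n)]
    assert all(abs(Kv[i] - lam * v[i]) < 1e-12 for i in range(n))
```

The check works because sin((i−1)θ) + sin((i+1)θ) = 2cos(θ)sin(iθ), and the zero boundary values sin(0) = sin((n+1)θ) = 0 absorb the missing neighbors in the first and last rows.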


Wednesday, 10:00AM - 12:00PM

Matrices and Orthogonal Polynomials (Location: Bristol A)
10:00 - 10:30 – Paul Terwilliger: Leonard pairs and the q-Tetrahedron algebra
10:30 - 11:00 – Edward Hanson: The tail condition for Leonard pairs
11:00 - 11:30 – Antonio Duran: Wronskian type determinants of orthogonal polynomials, Selberg type formulas and constant term identities
11:30 - 12:00 – Holger Dette: Optimal designs, orthogonal polynomials and random matrices

Symbolic Matrix Algorithms (Location: Bristol B)
10:00 - 10:30 – Gilles Villard: Euclidean lattice basis reduction: algorithms and experiments for disclosing integer relations
10:30 - 11:00 – Arne Storjohann: Computing the invariant structure of integer matrices: fast algorithms into practice
11:00 - 11:30 – Wayne Eberly: Early Termination over Small Fields: A Consideration of the Block Case
11:30 - 12:00 – David Saunders: Wanted: reconditioned preconditioners

Contributed Session on Numerical Range and Spectral Problems (Location: Tiverton)
10:00 - 10:20 – Christopher Bailey: An exact method for computing the minimal polynomial
10:20 - 10:40 – Yuri Nesterenko: Matrix canonical forms under unitary similarity transformations. Geometric approach
10:40 - 11:00 – Michael Karow: Inclusion theorems for pseudospectra of block triangular matrices
11:00 - 11:20 – Xuhua Liu: Connectedness, Hessians and Generalized Numerical Ranges

Matrices and Graph Theory (Location: Ocean)
10:00 - 10:30 – Pauline van den Driessche: A Simplified Principal Minor Assignment Problem
10:30 - 11:00 – Oliver Knill: The Dirac operator of a finite simple graph
11:00 - 11:30 – Steve Kirkland: The Minimum Coefficient of Ergodicity for a Markov Chain with a Specified Directed Graph
11:30 - 12:00 – Jason Molitierno: The Algebraic Connectivity of Planar Graphs

Structure and Randomization in Matrix Computations (Location: Patriots)
10:00 - 10:30 – Victor Pan: Transformations of Matrix Structures Work Again
10:30 - 11:00 – Yousef Saad: Multilevel low-rank approximation preconditioners
11:00 - 11:30 – Paul Van Dooren: On solving indefinite least squares-type problems via anti-triangular factorization
11:30 - 12:00 – Francoise Tisseur: Structured matrix polynomials and their sign characteristic


Thursday, 08:00AM - 08:45AM

Fuzhen Zhang
Location: Grand Ballroom
Generalized Matrix Functions

A generalized matrix function is a mapping from the matrix space to a number field associated with a subgroup of the permutation group and an irreducible character of that subgroup. Determinants and permanents are special cases of generalized matrix functions. Starting with a brief survey of certain elegant existing theorems, this talk will present some new ideas and results in the area.
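The definition can be made concrete with a brute-force sketch. Below, the subgroup is taken to be the full symmetric group S_n (an illustrative special case, not from the talk): the sign character then gives the determinant and the trivial character gives the permanent.

```python
from itertools import permutations

def sign(p):
    # sign of a permutation (tuple form) via its cycle decomposition:
    # each even-length cycle contributes a factor of -1
    s, seen = 1, set()
    for i in range(len(p)):
        if i in seen:
            continue
        j, length = i, 0
        while j not in seen:
            seen.add(j)
            j = p[j]
            length += 1
        if length % 2 == 0:
            s = -s
    return s

def generalized_matrix_function(A, chi):
    # d_chi(A) = sum over sigma in S_n of chi(sigma) * prod_i a_{i, sigma(i)}
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[i][p[i]]
        total += chi(p) * prod
    return total

A = [[1, 2], [3, 4]]
det = generalized_matrix_function(A, sign)         # sign character -> determinant
per = generalized_matrix_function(A, lambda p: 1)  # trivial character -> permanent
assert det == -2 and per == 10
```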

Thursday, 08:45AM - 09:30AM

Dianne O’Leary
Location: Grand Ballroom
Eckart-Young Meets Bayes: Optimal Regularized Low Rank Inverse Approximation

We consider the problem of finding approximate inverses of a given matrix. We show that a regularized Frobenius norm inverse approximation problem arises naturally in a Bayes risk framework, and we derive the solution to this problem. We then focus on low-rank regularized approximate inverses and obtain an inverse regularized Eckart-Young-like theorem. Numerical experiments validate our results and show what can be gained over the use of sub-optimal approximations such as truncated SVD matrices.


Thursday, 10:00AM - 12:00PM

Matrices and Orthogonal Polynomials (Location: Grand Ballroom)
10:00 - 10:30 – Luc Vinet: Harmonic oscillators and multivariate Krawtchouk polynomials
10:30 - 11:00 – Jeff Geronimo: Matrix Orthogonal Polynomials in Bivariate Orthogonal Polynomials
11:00 - 11:30 – Manuel Manas: Scalar and Matrix Orthogonal Laurent Polynomials in the Unit Circle and Toda Type Integrable Systems
11:30 - 12:00 – Hugo Woerdeman: Norm-constrained determinantal representations of polynomials

Contributed Session on Numerical Linear Algebra Part 3 (Location: Bristol A)
10:00 - 10:20 – Kim Batselier: A Geometrical Approach to Finding Multivariate Approximate LCMs and GCDs
10:20 - 10:40 – Xuzhou Chen: The Stationary Iterations Revisited
10:40 - 11:00 – William Morrow: Matrix-Free Methods for Verifying Constrained Positive Definiteness and Computing Directions of Negative Curvature
11:00 - 11:20 – Pedro Freitas: Derivatives of Functions of Matrices
11:20 - 11:40 – Abderrahman Bouhamidi: Global Krylov subspace methods for computing the meshless elastic polyharmonic splines
11:40 - 12:00 – Roel Van Beeumen: A practical rational Krylov algorithm for solving large-scale nonlinear eigenvalue problems

Symbolic Matrix Algorithms (Location: Bristol B)
10:00 - 10:30 – Victor Pan: Polynomial Evaluation and Interpolation: Fast and Stable Approximate Solution
10:30 - 11:00 – Brice Boyer: Reducing memory consumption in Strassen-like matrix multiplication
11:00 - 11:30 – Erich Kaltofen: Outlier detection by error correcting decoding and structured linear algebra methods
11:30 - 12:00 – George Labahn: Applications of fast nullspace computation for polynomial matrices

Contributed Session on Matrix Pencils (Location: Tiverton)
10:00 - 10:20 – Javier Perez: New bounds for roots of polynomials from Fiedler companion matrices
10:20 - 10:40 – Maria Isabel Bueno Cachadina: Symmetric Fiedler Pencils with Repetition as Strong Linearizations for Symmetric Matrix Polynomials
10:40 - 11:00 – Michal Wojtylak: Perturbations of singular hermitian pencils
11:00 - 11:20 – Andrii Dmytryshyn: Orbit closure hierarchies of skew-symmetric matrix pencils
11:20 - 11:40 – Vasilije Perovic: Linearizations of Matrix Polynomials in Bernstein Basis
11:40 - 12:00 – Susana Furtado: Skew-symmetric Strong Linearizations for Skew-symmetric Matrix Polynomials obtained from Fiedler Pencils with Repetition

Matrices and Graph Theory (Location: Ocean)
10:00 - 10:30 – Steven Osborne: Using the weighted normalized Laplacian to construct cospectral unweighted bipartite graphs
10:30 - 11:00 – Steve Butler: Equitable partitions and the normalized Laplacian matrix
11:00 - 11:30 – T. S. Michael: Matrix Ranks in Graph Theory
11:30 - 12:00 – Shaun Fallat: On the minimum number of distinct eigenvalues of a graph

Structure and Randomization in Matrix Computations (Location: Patriots)
10:00 - 10:30 – Yuli Eidelman: The bisection method and condition numbers for quasiseparable of order one matrices
10:30 - 11:00 – Luca Gemignani: The unitary eigenvalue problem
11:00 - 11:30 – Marc Baboulin: Accelerating linear system solutions using randomization
11:30 - 12:00 – Thomas Mach: Inverse eigenvalue problems linked to rational Arnoldi, and rational (non)symmetric Lanczos


Thursday, 01:30PM - 03:30PM

Matrices and Orthogonal Polynomials (Grand Ballroom)
1:30 - 2:00 Luis Velazquez: A quantum approach to Khrushchev’s formula
2:00 - 2:30 Francisco Marcellan: CMV matrices and spectral transformations of measures
2:30 - 3:00 Marko Huhtanen: Biradial orthogonal polynomials and Complex Jacobi matrices
3:00 - 3:30 Rostyslav Kozhan: Spectral theory of exponentially decaying perturbations of periodic Jacobi matrices

Linear and Nonlinear Perron-Frobenius Theory (Bristol A)
1:30 - 2:00 Roger Nussbaum: Generalizing the Krein-Rutman Theorem
2:00 - 2:30 Helena Smigoc: Constructing new spectra of nonnegative matrices from known spectra of nonnegative matrices
2:30 - 3:00 Julio Moro: The compensation game and the real nonnegative inverse eigenvalue problem
3:00 - 3:30 Yongdo Lim: Hamiltonian actions on the cone of positive definite matrices

Linear Algebra Education Issues (Bristol B)
1:30 - 2:00 Gilbert Strang: The Fundamental Theorem of Linear Algebra
2:00 - 2:30 Judi McDonald: The Role of Technology in Linear Algebra Education
2:30 - 3:00 Sandra Kingan: Linear algebra applications to data sciences
3:00 - 3:30 Jeffrey Stuart: Twenty years after The LACSG report, What happens in our textbooks and in our classrooms?

Contributed Session on Nonnegative Matrices Part 1 (Tiverton)
1:30 - 1:50 Keith Hooper: An Iterative Method for detecting Semi-positive Matrices
1:50 - 2:10 Akiko Fukuda: An extension of the dqds algorithm for totally nonnegative matrices and the Newton shift
2:10 - 2:30 Anthony Cronin: The nonnegative inverse eigenvalue problem
2:30 - 2:50 Shani Jose: On Inverse-Positivity of Sub-direct Sums of Matrices
2:50 - 3:10 Prashant Batra: About negative zeros of entire functions, totally non-negative matrices and positive definite quadratic forms
3:10 - 3:30 Pietro Paparella: Matrix roots of positive and eventually positive matrices.

Contributed Session on Graph Theory (Ocean)
1:30 - 1:50 Caitlin Phifer: The Cycle Intersection Matrix and Applications to Planar Graphs
1:50 - 2:10 Milica Andelic: A characterization of spectrum of a graph and of its vertex deleted subgraphs
2:10 - 2:30 David Jacobs: Locating eigenvalues of graphs
2:30 - 2:50 Nathan Warnberg: Positive Semidefinite Propagation Time
2:50 - 3:10 Andres Encinas: Perturbations of Discrete Elliptic operators
3:10 - 3:30 Joseph Fehribach: Matrices and their Kirchhoff Graphs

Structure and Randomization in Matrix Computations (Patriots)
1:30 - 2:00 Jianlin Xia: Randomized and matrix-free structured sparse direct solvers
2:00 - 2:30 Ilse Ipsen: Randomly sampling from orthonormal matrices: Coherence and Leverage Scores
2:30 - 3:00 Victor Pan: On the Power of Multiplication by Random Matrices
3:00 - 3:30 Michael Mahoney: Decoupling randomness and vector space structure leads to high-quality randomized matrix algorithms


Thursday, 05:30PM - 07:30PM

Linear and Nonlinear Perron-Frobenius Theory (Grand Ballroom)
5:30 - 6:00 Marianne Akian: Policy iteration algorithm for zero-sum two player stochastic games: complexity bounds involving nonlinear spectral radii
6:00 - 6:30 Assaf Goldberger: Infimum over certain conjugated operator norms as a generalization of the Perron–Frobenius eigenvalue
6:30 - 7:00 Brian Lins: Denjoy-Wolff Type Theorems on Cones
7:00 - 7:30 Olga Kushel: Generalized Perron-Frobenius property and positivity of matrix spectra

Matrix Inequalities (Bristol A)
5:30 - 6:00 Fuzhen Zhang: Some Inequalities involving Majorization
6:00 - 6:30 Takashi Sano: Kwong matrices and related topics
6:30 - 7:00 Charles Johnson: Monomial Inequalities for p-Newton Matrices and Beyond
7:00 - 7:30 Tomohiro Hayashi: An order-like relation induced by the Jensen inequality

Structured Matrix Functions and their Applications (Dedicated to Leonia Lerer on the occasion of his 70th birthday) (Bristol B)
5:30 - 6:00 Leiba Rodman: One sided invertibility, corona problems, and Toeplitz operators
6:00 - 6:30 Gilbert Strang: Toeplitz plus Hankel (and Alternating Hankel)
6:30 - 7:00 Marinus Kaashoek: An overview of Leonia Lerer’s work

Contributed Session on Matrix Completion Problems (Tiverton)
5:30 - 5:50 James McTigue: Partial matrices whose completions all have the same rank
5:50 - 6:10 Gloria Cravo: Matrix Completion Problems
6:10 - 6:30 Zheng Qu: Dobrushin ergodicity coefficient for Markov operators on cones, and beyond
6:30 - 6:50 Ryan Wasson: The Normal Defect of Some Classes of Matrices
6:50 - 7:10 Yue Liu: Ray Nonsingularity of Cycle Chain Matrices

Abstract Interpolation and Linear Algebra (Ocean)
5:30 - 6:00 Sanne ter Horst: Rational matrix solutions to the Leech equation: The Ball-Trent approach revisited
6:00 - 6:30 Christopher Beattie: Data-driven Interpolatory Model Reduction
6:30 - 7:00 Johan Karlsson: Uncertainty and Sensitivity of Analytic Interpolation Methods in Spectral Analysis
7:00 - 7:30 Dan Volok: Multiscale stochastic processes: prediction and covariance extension

Sign Pattern Matrices (Patriots)
5:30 - 6:00 Zhongshan Li: Sign Patterns with minimum rank 2 and upper bounds on minimum ranks
6:00 - 6:30 Judi McDonald: Properties of Skew-Symmetric Adjacency Matrices
6:30 - 7:00 Leslie Hogben: Sign patterns that require or allow generalizations of nonnegativity
7:00 - 7:30 Craig Erickson: Sign patterns that require eventual exponential nonnegativity


Friday, 08:00AM - 08:45AM

Dan Spielman, Location: Grand Ballroom
Nearly optimal algorithms for solving linear equations in SDD matrices.

We now have algorithms that solve systems of linear equations in symmetric, diagonally dominant matrices with m non-zero entries in time essentially O(m log m). These algorithms require no assumptions on the locations or values of these non-zero entries.

I will explain how these algorithms work, focusing on the algorithms of Koutis, Miller and Peng and of Kelner, Orecchia, Sidford, and Zhu.
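[Editor's note: the nearly-linear-time solvers surveyed in this talk are beyond a short sketch, but the objects they act on are easy to exhibit. The sketch below, with a made-up example, builds an SDD matrix (a path-graph Laplacian plus a diagonal shift) and solves it with textbook conjugate gradients, the baseline iteration these solvers accelerate via graph-theoretic preconditioning.]

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook CG iteration for a symmetric positive definite system Ax = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# SDD example: Laplacian of a path graph on 5 vertices, plus a small
# diagonal shift making it strictly diagonally dominant (hence SPD).
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
L = np.zeros((5, 5))
for i, j in edges:
    L[i, i] += 1; L[j, j] += 1
    L[i, j] -= 1; L[j, i] -= 1
A = L + 0.1 * np.eye(5)
b = np.ones(5)
x = conjugate_gradient(A, b)
```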

Friday, 08:45AM - 09:30AM

Jean Bernard Lasserre, Location: Grand Ballroom
Tractable characterizations of nonnegativity on a closed set via Linear Matrix Inequalities

The tractable characterization of polynomials (and even semi-algebraic functions) which are nonnegative on a set is a topic of independent interest in mathematics, but it is also of primary importance in many applications, notably in global optimization.

We will review two kinds of tractable characterizations of polynomials which are nonnegative on a basic closed semi-algebraic set K ⊂ R^n. Remarkably, both characterizations are through Linear Matrix Inequalities and can be checked by solving a hierarchy of semidefinite programs or generalized eigenvalue problems.

The first type of characterization is when knowledge of K is through its defining polynomials, i.e., K = {x : gj(x) ≥ 0, j = 1, . . . , m}, in which case some powerful certificates of positivity can be stated in terms of sums of squares (SOS)-weighted representations. For instance, in global optimization this allows one to define a hierarchy of semidefinite relaxations which yields a monotone sequence of lower bounds converging to the global optimum (and in fact, finite convergence is generic).

Another (dual) way of looking at nonnegativity is when knowledge of K is through moments of a measure whose support is K. In this case, checking whether a polynomial is nonnegative on K reduces to solving a sequence of generalized eigenvalue problems associated with a countable (nested) family of real symmetric matrices of increasing size. When applied in global optimization over K, this results in a monotone sequence of upper bounds converging to the global minimum, which complements the previous sequence of lower bounds. These two (dual) characterizations provide convex inner (resp. outer) approximations (by spectrahedra) of the convex cone of polynomials nonnegative on K.
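[Editor's note: under the standard Archimedean assumption (Putinar's Positivstellensatz), the SOS-weighted representation referred to above can be written compactly; the degree bound d indexing the hierarchy is notation introduced here, not from the abstract.]

```latex
f(x) \;=\; \sigma_0(x) \;+\; \sum_{j=1}^{m} \sigma_j(x)\, g_j(x),
\qquad \sigma_0, \dots, \sigma_m \ \text{sums of squares of polynomials.}
```

Fixing deg(σ_j g_j) ≤ 2d turns the existence of such a certificate into a semidefinite feasibility problem, and maximizing λ subject to f − λ admitting a degree-2d certificate yields the monotone sequence of lower bounds on the minimum of f over K described above.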


Friday, 10:00AM - 12:00PM

Matrices and Orthogonal Polynomials (Grand Ballroom)
10:00 - 10:30 Carl Jagels: Laurent orthogonal polynomials and a QR algorithm for pentadiagonal recursion matrices
10:30 - 11:00 Vladimir Druskin: Finite-difference Gaussian rules
11:00 - 11:30 Lothar Reichel: Rational Krylov methods and Gauss quadrature
11:30 - 12:00 Raf Vandebril: A general matrix framework to predict short recurrences linked to (extended) Krylov spaces

Matrix Inequalities (Bristol A)
10:00 - 10:30 Tin-Yau Tam: On Ky Fan’s result on eigenvalues and real singular values of a matrix
10:30 - 11:00 Fuad Kittaneh: A numerical radius inequality involving the generalized Aluthge transform
11:00 - 11:30 Rajesh Pereira: Two Conjectured Inequalities motivated by Quantum Structures.
11:30 - 12:00 Ameur Seddik: Characterizations of some distinguished subclasses of normal operators by operator inequalities

Linear Algebra Education Issues (Bristol B)
10:00 - 10:30 David Strong: Good first day examples in linear algebra
10:30 - 11:00 Rachel Quinlan: A proof evaluation exercise in an introductory linear algebra course
11:00 - 11:30 Megan Wawro: Designing instruction that builds on student reasoning in linear algebra: An example from span and linear independence
11:30 - 12:00 Sepideh Stewart: Teaching Linear Algebra in three Worlds of Mathematical Thinking

Contributed Session on Nonnegative Matrices Part 2 (Tiverton)
10:00 - 10:20 James Weaver: Nonnegative Eigenvalues and Total Orders
10:20 - 10:40 Hiroshi Kurata: Some monotonicity results for Euclidean distance matrices
10:40 - 11:00 Maguy Trefois: Binary factorizations of the matrix of all ones
11:00 - 11:20 Ravindra Bapat: Product distance matrix of a tree with matrix weights
11:20 - 11:40 Naomi Shaked-Monderer: On the CP-rank and the DJL Conjecture

Abstract Interpolation and Linear Algebra (Ocean)
10:00 - 10:30 Izchak Lewkowicz: “Wrong” side interpolation by positive real rational functions
10:30 - 11:00 Quanlei Fang: Potapov’s fundamental matrix inequalities and interpolation problems
11:00 - 11:30 Daniel Alpay: Quaternionic inner product spaces and interpolation of Schur multipliers for the slice hyperholomorphic functions
11:30 - 12:00 Vladimir Bolotnikov: Interpolation by contractive multipliers from Hardy space to weighted Hardy space

Sign Pattern Matrices (Patriots)
10:00 - 10:30 Marina Arav: Sign Vectors and Duality in Rational Realization of the Minimum Rank
10:30 - 11:00 Wei Gao: Sign patterns with minimum rank 3
11:00 - 11:30 Timothy Melvin: Using the Nilpotent-Jacobian Method Over Various Fields and Counterexamples to the Superpattern Conjecture.


Friday, 01:20PM - 02:05PM

Ravi Kannan, Location: Grand Ballroom
Randomized Matrix Algorithms

A natural approach to getting a low-rank approximation to a large matrix is to compute with a randomly sampled sub-matrix. Initial theorems gave error bounds if we use sampling probabilities proportional to the squared length of the rows. More sophisticated sampling approaches, with probabilities based on leverage scores or volumes of simplices spanned by k-tuples of rows, have led to proofs of better error bounds.

Recently, sparse random subspace embeddings have provided elegant solutions to low-rank approximation as well as linear regression problems. These algorithms fit into a computational model referred to as the Streaming Model and its multi-pass versions. Also, known theorems assert that not much improvement is possible in these models.

But, more recently, a Cloud Computing model has been shown to be more powerful for these problems, yielding newer algorithms. The talk will survey some of these developments.
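[Editor's note: the length-squared sampling mentioned in the first paragraph can be sketched in a few lines. This is a minimal unbiased-sketch illustration, not any specific published algorithm; matrix sizes and the sample count s are arbitrary.]

```python
import numpy as np

def length_squared_sample(A, s, rng):
    """Sample s rows of A with probability proportional to squared row norm,
    rescaling each picked row so that E[S^T S] = A^T A."""
    p = np.sum(A * A, axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[0], size=s, p=p)
    # Row i is kept as A[i] / sqrt(s * p[i]) to make the sketch unbiased.
    return A[idx] / np.sqrt(s * p[idx])[:, None]

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 20))
S = length_squared_sample(A, 500, rng)
# Relative error of the sketched Gram matrix versus the exact one.
err = np.linalg.norm(S.T @ S - A.T @ A) / np.linalg.norm(A.T @ A)
```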

Friday, 02:15PM - 04:15PM

Structured Matrix Functions and their Applications (Dedicated to Leonia Lerer on the occasion of his 70th birthday) (Grand Ballroom)
2:15 - 2:45 Joseph Ball: Structured singular-value analysis and structured Stein inequalities
2:45 - 3:15 Alexander Sakhnovich: Discrete Dirac systems and structured matrices
3:15 - 3:45 Hermann Rabe: Norm asymptotics for a special class of Toeplitz generated matrices with perturbations
3:45 - 4:15 Harm Bart: Zero sums of idempotents and Banach algebras failing to be spectrally regular

Matrices and Total Positivity (Bristol A)
2:15 - 2:45 Charles Johnson: The Totally Positive (Nonnegative) Completion Problem and Other Recent Work
2:45 - 3:15 Gianna Del Corso: qd-type methods for totally nonnegative quasiseparable matrices
3:15 - 3:45 Olga Kushel: Generalized total positivity and eventual properties of matrices
3:45 - 4:15 Rafael Canto: Quasi-LDU factorization of totally nonpositive matrices.

Linear Algebra Education Issues (Bristol B)
2:15 - 2:45 Robert Beezer: A Modern Online Linear Algebra Textbook
2:45 - 3:15 Tom Edgar: Flipping the Technology in Linear Algebra
3:15 - 3:45 Sang-Gu Lee: Interactive Mobile Linear Algebra with Sage
3:45 - 4:15 Jason Grout: The Sage Cell server and other technology in teaching

Matrix Inequalities (Ocean)
2:15 - 2:45 Natalia Bebiano: Tsallis entropies and matrix trace inequalities in Quantum Statistical Mechanics
2:45 - 3:15 Takeaki Yamazaki: On some matrix inequalities involving matrix geometric mean of several matrices
3:15 - 3:45 Minghua Lin: Fischer type determinantal inequalities for accretive-dissipative matrices

Contributed Session on Positive Definite Matrices (Patriots)
2:15 - 2:35 Bala Rajaratnam: Regularization of positive definite matrices: Connections between linear algebra, graph theory, and statistics
2:35 - 2:55 Apoorva Khare: Sparse positive definite matrices, graphs, and absolutely monotonic functions
2:55 - 3:15 Dominique Guillot: Preserving low rank positive semidefinite matrices


Friday, 04:45PM - 06:45PM

Structured Matrix Functions and their Applications (Dedicated to Leonia Lerer on the occasion of his 70th birthday) (Grand Ballroom)
4:45 - 5:15 Ilya Spitkovsky: On almost normal matrices
5:15 - 5:45 David Kimsey: Trace formulas for truncated block Toeplitz operators
5:45 - 6:15 Hugo Woerdeman: Determinantal representations of stable polynomials

Matrices and Total Positivity (Bristol A)
4:45 - 5:15 Jorge Delgado: Accurate and fast algorithms for some totally nonnegative matrices arising in CAGD
5:15 - 5:45 Alvaro Barreras: Computations with matrices with signed bidiagonal decomposition
5:45 - 6:15 Jose-Javier Martínez: More accurate polynomial least squares fitting by using the Bernstein basis
6:15 - 6:45 Stephane Launois: Deleting derivations algorithm and TNN matrices

Matrix Methods in Computational Systems Biology and Medicine (Bristol B)
4:45 - 5:15 Lee Altenberg: Two-Fold Irreducible Matrices: A New Combinatorial Class with Applications to Random Multiplicative Growth
5:15 - 5:45 Natasa Durdevac: Understanding real-world systems using random-walk-based approaches for analyzing networks
5:45 - 6:15 Marcus Weber: Improved molecular simulation strategies using matrix analysis
6:15 - 6:45 Amir Niknejad: Stochastic Dimension Reduction Via Matrix Factorization in Genomics and Proteomics

Contributed Session on Matrix Equalities and Inequalities (Tiverton)
4:45 - 5:05 Bas Lemmens: Geometry of Hilbert’s and Thompson’s metrics on cones
5:05 - 5:25 Dawid Janse van Rensburg: Rank One Perturbations of H-Positive Real Matrices
5:25 - 5:45 Hosoo Lee: Sylvester equation and some special applications
5:45 - 6:05 Jadranka Micic Hot: Refining some inequalities involving quasi-arithmetic means

Linear Algebra Problems in Quantum Computation (Ocean)
4:45 - 5:15 Man-Duen Choi: Who’s Afraid of Quantum Computers?
5:15 - 5:45 Panayiotis Psarrakos: An envelope for the spectrum of a matrix
5:45 - 6:15 Raymond Nung-Sing Sze: A Linear Algebraic Approach in Constructing Quantum Error Correction Code

Applications of Tropical Mathematics (Patriots)
4:45 - 5:15 Marianne Johnson: Tropical matrix semigroups
5:15 - 5:45 Pascal Benchimol: Tropicalizing the simplex algorithm
5:45 - 6:15 Marie MacCaig: Integrality in max-algebra
6:15 - 6:45 Viorel Nitica: Generating sets of tropical hemispaces


Minisymposia

Abstract Interpolation and Linear Algebra
Organizer(s): Joseph Ball, Vladimir Bolotnikov
Thursday 05:30PM – 07:30PM, Ocean
Friday 10:00AM – 12:00PM, Ocean

Advances in Combinatorial Matrix Theory and its Applications
Organizer(s): Carlos Fonseca, Geir Dahl
Monday 10:00AM – 12:00PM, Bristol A
Monday 02:15PM – 04:15PM, Bristol A
Monday 04:45PM – 06:45PM, Bristol A

Applications of Tropical Mathematics
Organizer(s): James Hook, Marianne Johnson, Sergei Sergeev
Tuesday 04:45PM – 06:45PM, Bristol A
Friday 04:45PM – 06:45PM, Patriots

Generalized Inverses and Applications
Organizer(s): Minerva Catral, Nestor Thome, Yimin Wei
Monday 10:00AM – 12:00PM, Bristol B
Monday 02:15PM – 04:15PM, Bristol B

Krylov Subspace Methods for Linear Systems
Organizer(s): James Baglama, Eric de Sturler
Monday 02:15PM – 04:15PM, Patriots
Monday 04:45PM – 06:45PM, Patriots

Linear Algebra Education Issues
Organizer(s): Avi Berman, Sang-Gu Lee, Steven Leon
Thursday 01:30PM – 03:30PM, Bristol B
Friday 10:00AM – 12:00PM, Bristol B
Friday 02:15PM – 04:15PM, Bristol B

Linear Algebra Problems in Quantum Computation
Organizer(s): Chi-Kwong Li, Yiu Tung Poon
Monday 10:00AM – 12:00PM, Grand Ballroom
Monday 02:15PM – 04:15PM, Grand Ballroom
Monday 04:45PM – 06:45PM, Grand Ballroom
Tuesday 10:00AM – 12:00PM, Grand Ballroom
Friday 04:45PM – 06:45PM, Ocean

Linear Algebra, Control, and Optimization
Organizer(s): Biswa Datta
Tuesday 10:00AM – 12:00PM, Bristol B
Tuesday 02:15PM – 04:15PM, Bristol B

Linear and Nonlinear Perron-Frobenius Theory
Organizer(s): Thomas J. Laffey, Bas Lemmens, Raphael Loewy
Thursday 01:30PM – 03:30PM, Bristol A
Thursday 05:30PM – 07:30PM, Grand Ballroom

Linear Complementarity Problems and Beyond
Organizer(s): Sou-Cheng Choi
Tuesday 02:15PM – 04:15PM, Bristol A

Linear Least Squares Methods: Algorithms, Analysis, and Applications
Organizer(s): Sanzheng Qiao, Huaian Diao
Monday 10:00AM – 12:00PM, Patriots

Matrices and Graph Theory
Organizer(s): Louis Deaett, Leslie Hogben
Tuesday 02:15PM – 04:15PM, Grand Ballroom
Tuesday 04:45PM – 06:45PM, Grand Ballroom
Wednesday 10:00AM – 12:00PM, Ocean
Thursday 10:00AM – 12:00PM, Ocean

Matrices and Orthogonal Polynomials
Organizer(s): Jeff Geronimo, Francisco Marcellan, Lothar Reichel
Wednesday 10:00AM – 12:00PM, Bristol A
Thursday 10:00AM – 12:00PM, Grand Ballroom
Thursday 01:30PM – 03:30PM, Grand Ballroom
Friday 10:00AM – 12:00PM, Grand Ballroom

Matrices and Total Positivity
Organizer(s): Jorge Delgado, Shaun Fallat, Juan Manuel Pena
Friday 02:15PM – 04:15PM, Bristol A
Friday 04:45PM – 06:45PM, Bristol A

Matrices over Idempotent Semirings
Organizer(s): Martin Gavalec, Sergei Sergeev
Monday 04:45PM – 06:45PM, Bristol B

Matrix Inequalities
Organizer(s): Minghua Lin, Fuzhen Zhang
Thursday 05:30PM – 07:30PM, Bristol A
Friday 10:00AM – 12:00PM, Bristol A
Friday 02:15PM – 04:15PM, Ocean

Matrix Methods for Polynomial Root-Finding
Organizer(s): Dario Bini, Yuli Eidelman, Marc Van Barel, Pavel Zhlobich
Tuesday 10:00AM – 12:00PM, Ocean
Tuesday 02:15PM – 04:15PM, Ocean
Tuesday 04:45PM – 06:45PM, Ocean

Matrix Methods in Computational Systems Biology and Medicine
Organizer(s): Konstantin Fackeldey, Amir Niknejad, Marcus Weber
Friday 04:45PM – 06:45PM, Bristol B

Multilinear Algebra and Tensor Decompositions
Organizer(s): Lieven De Lathauwer, Eugene Tyrtyshnikov
Tuesday 10:00AM – 12:00PM, Patriots
Tuesday 02:15PM – 04:15PM, Patriots
Tuesday 04:45PM – 06:45PM, Patriots

Nonlinear Eigenvalue Problems
Organizer(s): Froilan M. Dopico, Volker Mehrmann, Francoise Tisseur
Monday 10:00AM – 12:00PM, Ocean
Monday 02:15PM – 04:15PM, Ocean
Monday 04:45PM – 06:45PM, Ocean

Randomized Matrix Algorithms
Organizer(s): Ravi Kannan, Michael Mahoney, Petros Drineas, Nathan Halko, Gunnar Martinsson, Joel Tropp
Tuesday 10:00AM – 12:00PM, Bristol A

Sign Pattern Matrices
Organizer(s): Frank Hall, Zhongshan (Jason) Li, Hein van der Holst
Thursday 05:30PM – 07:30PM, Patriots
Friday 10:00AM – 12:00PM, Patriots

Structure and Randomization in Matrix Computations
Organizer(s): Victor Pan, Jianlin Xia
Wednesday 10:00AM – 12:00PM, Patriots
Thursday 10:00AM – 12:00PM, Patriots
Thursday 01:30PM – 03:30PM, Patriots

Structured Matrix Functions and their Applications (Dedicated to Leonia Lerer on the occasion of his 70th birthday)
Organizer(s): Rien Kaashoek, Hugo Woerdeman
Thursday 05:30PM – 07:30PM, Bristol B
Friday 02:15PM – 04:15PM, Grand Ballroom
Friday 04:45PM – 06:45PM, Grand Ballroom

Symbolic Matrix Algorithms
Organizer(s): Jean-Guillaume Dumas, Mark Giesbrecht
Wednesday 10:00AM – 12:00PM, Bristol B
Thursday 10:00AM – 12:00PM, Bristol B


Abstracts

Abstract Interpolation and Linear Algebra

Quaternionic inner product spaces and interpolation of Schur multipliers for the slice hyperholomorphic functions

Daniel Alpay, Vladimir Bolotnikov, Fabrizio Colombo, Irene Sabadini

In the present paper we present some aspects of Schur analysis in the setting of slice hyperholomorphic functions. We study various interpolation problems for Schur multipliers. To that purpose, we first present results on quaternionic inner product spaces. We show in particular that a uniformly positive subspace in a quaternionic Krein space is ortho-complemented.

Data-driven Interpolatory Model Reduction

Christopher Beattie

Dynamical systems are a principal tool in the modeling and control of physical processes in engineering and the sciences. Due to the increasing complexity of underlying mathematical models, model reduction has become an indispensable tool in harnessing simulation in the design/experiment refinement cycle, particularly those cycles that involve parameter studies and uncertainty analysis. This presentation focuses on the development of efficient interpolatory model reduction strategies that are capable of producing optimal reduced order models. Particular attention is paid to Loewner matrix methods that utilize empirical data provided either by physical experiments or by ancillary simulations.
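[Editor's note: the Loewner matrices mentioned at the end are built directly from transfer-function samples. Given left data (μi, vi) and right data (λj, wj), the Loewner matrix has entries (vi − wj)/(μi − λj), and its rank reveals the order of the underlying system. The sample points and the example system below are made up for illustration.]

```python
import numpy as np

def loewner_matrix(mu, la, V, W):
    """Loewner matrix L[i, j] = (V[i] - W[j]) / (mu[i] - la[j])
    from left data (mu, V) and right data (la, W)."""
    return (V[:, None] - W[None, :]) / (mu[:, None] - la[None, :])

# Samples of the degree-1 transfer function H(s) = 1 / (s + 1)
# at two disjoint point sets.
H = lambda s: 1.0 / (s + 1.0)
mu = np.array([1.0, 2.0, 3.0])   # left interpolation points
la = np.array([4.0, 5.0, 6.0])   # right interpolation points
L = loewner_matrix(mu, la, H(mu), H(la))
# For a degree-1 rational function the 3x3 Loewner matrix has rank 1.
```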

Interpolation by contractive multipliers from Hardy space to weighted Hardy space

Vladimir Bolotnikov

We will show that any contractive multiplier from the Hardy space to a weighted Hardy space can be factored as a fixed factor composed with a classical Schur multiplier (a contractive multiplier between Hardy spaces). The result will then be applied to get a realization result for weighted Hardy-inner functions and, on the other hand, some results on interpolation for the Hardy-to-weighted-Hardy contractive multiplier class. The talk is based on joint work with Joseph A. Ball.

Potapov’s fundamental matrix inequalities and interpolation problems

Quanlei Fang

As an extension of Potapov’s approach of fundamental matrix inequalities to interpolation problems, abstract interpolation problems have been studied extensively since the 1980s. We will discuss some aspects of possible generalizations of the Potapov fundamental matrix inequalities and related interpolation problems.

Uncertainty and Sensitivity of Analytic Interpolation Methods in Spectral Analysis

Johan Karlsson, Tryphon Georgiou

Spectral analysis of a time series is often posed as an analytic interpolation problem based on estimated second-order statistics. Here the family of power spectra which are consistent with a given range of interpolation values represents the uncertainty set about the “true” power spectrum. Our aim is to quantify the size of this uncertainty set using suitable notions of distance, and in particular, to compute the diameter of the set, since this represents an upper bound on the distance between any choice of a nominal element in the set and the “true” power spectrum. Since the uncertainty set may contain power spectra with lines and discontinuities, it is natural to quantify distances in the weak topology, the topology defined by continuity of moments. We provide examples of such weakly-continuous metrics and focus on particular metrics for which we can explicitly quantify spectral uncertainty.

We then specifically consider certain high resolution techniques which utilize Nevanlinna-Pick interpolation, and compute worst-case a priori uncertainty bounds solely on the basis of the interpolation points. This allows for a priori selection of the interpolation points for minimal uncertainty over selected frequency bands.

“Wrong” side interpolation by positive real rational functions

Daniel Alpay, Vladimir Bolotnikov, Izchak Lewkowicz

We show here that arbitrary m nodes in the open left half of the complex plane can always be mapped to anywhere in the complex plane by rational positive real functions of degree at most m. Moreover, we introduce a parameterization of all these interpolating functions.

Rational matrix solutions to the Leech equation: The Ball-Trent approach revisited

Sanne ter Horst

Using spectral factorization techniques, a method is given by which rational matrix solutions to the Leech equation with rational matrix data can be computed explicitly. This method is based on an approach by J.A. Ball and T.T. Trent, and generalizes techniques from recent work of T.T. Trent for the case of polynomial matrix data.

Multiscale stochastic processes: prediction and covariance extension


Dan Volok

Prediction and covariance extension of multiscale stochastic processes, which arise in the wavelet filter theory, require an efficient algorithm for inversion of positive matrices with a certain structure. We shall discuss such an algorithm (a recursion of the Schur-Levinson type) and its applications. This talk is based on joint work with Daniel Alpay.

Advances in Combinatorial Matrix Theory and its Applications

Several types of Bessel numbers generalized and some combinatorial setting

Gi-Sang Cheon

The Bessel numbers and the Stirling numbers have a long history and both have been generalized in a variety of directions. Here we present a second level generalization that has both as special cases. This generalization often preserves the inverse relation between the first and second kind and has simple combinatorial interpretations. We also frame the discussion in terms of the exponential Riordan group and Riordan matrices. Then the inverse relation is just the matrix inverse, and factoring inside the group leads to many results connecting the various Stirling and Bessel numbers.

This is joint work with Ji-Hwan Jung and Louis Shapiro.

On the P-sets of acyclic matrices

Zhibin Du, Carlos da Fonseca

In this talk we characterize those trees for which the number of P-vertices is maximal.

Majorization transforms and Ryser’s algorithm

Geir Dahl

The notion of a transfer is central in the theory of majorization; in particular, it lies behind the characterization of majorization in terms of doubly stochastic matrices. We introduce a new form of transfer and prove some of its properties. Moreover, we discuss a connection to Ryser’s algorithm for constructing (0,1)-matrices with given row and column sums.
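[Editor's note: as background for the connection to Ryser's algorithm, a (0,1)-matrix with prescribed row and column sums can be built greedily, placing each row's ones in the columns with the largest remaining column sums. This is a minimal sketch assuming the Gale-Ryser condition holds; the input sums are illustrative.]

```python
def ryser_matrix(row_sums, col_sums):
    """Build a (0,1)-matrix with the given row and column sums by filling
    each row greedily into the columns with largest remaining demand
    (assumes the sums are realizable, i.e. Gale-Ryser holds)."""
    n = len(col_sums)
    remaining = list(col_sums)
    M = []
    for r in row_sums:
        # Columns sorted by remaining demand, largest first.
        cols = sorted(range(n), key=lambda j: -remaining[j])[:r]
        row = [0] * n
        for j in cols:
            row[j] = 1
            remaining[j] -= 1
        M.append(row)
    return M

M = ryser_matrix([2, 2, 1], [2, 2, 1])
# Row sums and column sums of M match the prescribed values.
```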

Eigenvalues, Multiplicities and Linear Trees

Charles Johnson

Let H(G) be the collection of all Hermitian matrices with graph G. When G is a tree T, there are many important results about the collection of multiplicity lists for the eigenvalues among matrices in H(T). However, some even stronger statements begin to fail for rather large trees. This seems to be not because of the number of vertices, but because the stronger statements are true for “linear” trees. (All trees on fewer than 10 vertices are linear.) We mention some recent work on linear trees and their multiplicity lists.

Patterns of Alternating Sign Matrices

Kathleen Kiernan, Richard Brualdi, Seth Meyer, Michael Schroeder

We look at some properties of the zero-nonzero patterns of alternating sign matrices (ASMs). In particular, we show the minimal term rank of an n×n ASM is ⌈2√(n+1) − 2⌉. We also discuss maximal ASMs, with respect to the notion of ASM extension. An ASM A is an ASM extension of ASM B if the matrices are the same size, and any non-zero entries of B are strictly contained within the non-zero entries of A.
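[Editor's note: for readers unfamiliar with ASMs, the defining conditions (entries in {−1, 0, 1}, every row and column summing to 1 with partial sums staying in {0, 1}) are mechanical to check.]

```python
def is_asm(M):
    """Check the alternating sign matrix conditions on all rows and columns."""
    rows = [list(r) for r in M]
    cols = [list(c) for c in zip(*M)]
    for line in rows + cols:
        partial = 0
        for entry in line:
            if entry not in (-1, 0, 1):
                return False
            partial += entry
            if partial not in (0, 1):
                return False
        if partial != 1:  # each full row/column sums to exactly 1
            return False
    return True

# The smallest ASM that is not a permutation matrix.
A = [[0, 1, 0],
     [1, -1, 1],
     [0, 1, 0]]
```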

Creating a user-satisfaction index

In-Jae Kim, Brian Barthel

In this talk it is shown how to use PageRank centrality to construct a parsimonious survey instrument, and then how to create a user-satisfaction index based on the survey using multivariate methods.
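[Editor's note: the PageRank centrality used here is the standard damped power iteration; a minimal sketch follows, with a made-up toy graph and the customary damping factor 0.85.]

```python
def pagerank(links, d=0.85, iters=100):
    """Power iteration for PageRank on a dict mapping node -> out-links."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            for v in out:
                new[v] += d * rank[u] / len(out)
        rank = new
    return rank

links = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
r = pagerank(links)
# Ranks sum to 1; "b" is the most central node (it has two in-links).
```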

Good Bases for Nonnegative Realizations

Judi McDonald

Nonnegative and eventually nonnegative matrices are important in many applications. The essence of nonnegativity is two-fold: it depends on the spectrum and on the basis of eigenvectors. In this talk, good potential bases for nonnegative and eventually nonnegative matrices will be illustrated, as well as effective ways to organize these bases.

Large set of equiangular lines in Euclidean spaces

Ferenc Szollosi

A set of lines, represented by the unit vectors v1, v2, . . . , vk, is called equiangular if there is a constant d such that |⟨vi, vj⟩| = d for all 1 ≤ i < j ≤ k. In this talk we give an overview of the known constructions of maximal sets of equiangular lines and present a new general lower bound on the number of equiangular lines in real Euclidean spaces.
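[Editor's note: a classical concrete instance of the definition, added for illustration: the six diagonals of the icosahedron form six equiangular lines in R^3 with common value |⟨vi, vj⟩| = 1/√5, which is easy to verify numerically.]

```python
import numpy as np

# Six equiangular lines in R^3: spanned by (0, ±1, phi) and its cyclic
# shifts, where phi is the golden ratio.
phi = (1 + np.sqrt(5)) / 2
raw = [(0, 1, phi), (0, -1, phi),
       (1, phi, 0), (-1, phi, 0),
       (phi, 0, 1), (phi, 0, -1)]
V = np.array(raw, dtype=float)
V /= np.linalg.norm(V, axis=1)[:, None]

# All pairwise |<vi, vj>| coincide (and equal 1/sqrt(5)).
G = np.abs(V @ V.T)
off_diag = G[~np.eye(6, dtype=bool)]
```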

A combinatorial approach to nearly uncoupled Markov chains

Ryan Tifenbach

A Markov chain is a sequence of random variables x0, x1, . . . that take on values in a state space S. A set A ⊆ S is referred to as an almost invariant aggregate if transitions from xt to xt+1 where xt ∈ A and xt+1 ∉ A are exceedingly rare. A Markov chain is referred to as nearly uncoupled if there are two or more disjoint almost invariant aggregates contained in its state space. Nearly uncoupled Markov chains are characterised by long periods of relatively constant behaviour, punctuated by sudden, extreme changes. We present algorithms for producing almost invariant aggregates of a nearly uncoupled Markov chain, given in the form of a stochastic matrix. These algorithms combine a combinatorial approach with the utilisation of the stochastic complement, a tool from the theory of stochastic matrices.
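[Editor's note: the stochastic complement mentioned at the end has an explicit formula. For a stochastic matrix partitioned into blocks P11, P12, P21, P22, the complement of the first block of states is S = P11 + P12(I − P22)^{-1}P21, which is again stochastic. The chain below is made up for illustration.]

```python
import numpy as np

def stochastic_complement(P, idx):
    """Stochastic complement of the states in idx:
    S = P11 + P12 (I - P22)^{-1} P21."""
    idx = np.asarray(idx)
    rest = np.setdiff1d(np.arange(P.shape[0]), idx)
    P11 = P[np.ix_(idx, idx)]
    P12 = P[np.ix_(idx, rest)]
    P21 = P[np.ix_(rest, idx)]
    P22 = P[np.ix_(rest, rest)]
    return P11 + P12 @ np.linalg.solve(np.eye(len(rest)) - P22, P21)

# A nearly uncoupled chain: {0, 1} and {2, 3} are almost invariant.
P = np.array([[0.90, 0.09, 0.01, 0.00],
              [0.10, 0.89, 0.00, 0.01],
              [0.00, 0.02, 0.58, 0.40],
              [0.02, 0.00, 0.30, 0.68]])
S = stochastic_complement(P, [0, 1])
# Rows of S sum to 1: the complement is itself a stochastic matrix.
```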

Companion Matrices and Spectrally Arbitrary Patterns

Kevin Vander Meulen, Brydon Eastman, In-Jae Kim, Bryan Shader

We explore some sparse spectrally arbitrary matrix patterns. We focus on patterns with entries in {∗, #, 0}, with ∗ indicating that the entry is a nonzero real number and # representing an arbitrary real number. As such, the standard companion matrix of order n can be considered to be a spectrally arbitrary {∗, #, 0}-pattern with (n − 1) ∗ entries and n # entries. We characterize all patterns which, like the standard companion matrices, uniquely realize every possible spectrum for a real matrix. We compare some of our work to recent results on companion matrices (including Fiedler companion matrices), as well as existing research on spectrally arbitrary zero-nonzero patterns.
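[Editor's note: as context for why the companion pattern is spectrally arbitrary, the standard companion matrix realizes any monic real polynomial as its characteristic polynomial, so its # entries can be chosen to hit any spectrum. The polynomial below is illustrative.]

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of x^n + c[0] x^(n-1) + ... + c[n-1]:
    ones on the subdiagonal, negated coefficients in the last column."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)
    C[:, -1] = -np.asarray(coeffs, dtype=float)[::-1]
    return C

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3): eigenvalues 1, 2, 3.
C = companion([-6.0, 11.0, -6.0])
eig = np.sort(np.linalg.eigvals(C).real)
```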

Applications of Tropical Mathematics

Tropicalizing the simplex algorithm

Xavier Allamigeon, Pascal Benchimol, Stephane Gaubert, Michael Joswig

We present an analogue of the simplex algorithm, allowing one to solve tropical (max-plus) linear programming problems. This algorithm computes, using signed tropical Cramer determinants, a sequence of feasible solutions which coincides with the image under the valuation of the sequence produced by the classical simplex algorithm over the ordered field of Puiseux series. This is motivated in particular by (deterministic) mean payoff games (which are equivalent to tropical linear programming).

Tropical eigenvalues

James Hook, Francoise Tisseur

Tropical eigenvalues can be used to approximate the order of magnitude of the eigenvalues of a classical matrix or matrix polynomial. This approximation is useful when the classical eigenproblem is badly scaled. For example,

A = [ 10^5   −10^4  ]        Val(A) = [ 5    4 ]
    [ i      10^−2  ],                [ 0   −2 ].

The classical matrix A has eigenvalues λ1 = (−1 + 0.000001i) × 10^5 and λ2 = (−0.01 − i) × 10^−1. By taking the componentwise log of absolute values, A corresponds to the max-plus matrix Val(A), which has tropical eigenvalues tλ1 = 5 and tλ2 = −1.

In this talk I will give an overview of the theory of tropical eigenvalues for matrices and matrix polynomials, emphasising the relationship between tropical and classical spectra.
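To make the worked example concrete: the largest tropical (max-plus) eigenvalue of a matrix equals its maximum cycle mean, which for a small matrix can be found by brute force over max-plus powers. A sketch (my own illustration, not from the talk):

```python
import numpy as np

def maxplus_mul(A, B):
    """Max-plus matrix product: (A (x) B)_ij = max_k (A_ik + B_kj)."""
    return np.max(A[:, :, None] + B[None, :, :], axis=1)

def max_tropical_eig(A):
    """Largest tropical eigenvalue = maximum cycle mean,
    i.e. max over cycle lengths k <= n of max_i (A^(x)k)_ii / k."""
    n = A.shape[0]
    P, best = A.copy(), float(np.max(np.diag(A)))
    for k in range(2, n + 1):
        P = maxplus_mul(P, A)
        best = max(best, float(np.max(np.diag(P))) / k)
    return best

# Val(A) from the example above: entrywise log10 of |entries|
A = np.array([[1e5, -1e4], [1j, 1e-2]])
ValA = np.log10(np.abs(A))           # [[5, 4], [0, -2]]
print(max_tropical_eig(ValA))        # → 5.0, matching t-lambda_1 = 5
```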

Tropical matrix semigroups

Marianne Johnson

In this talk I will discuss some of my recent research with Mark Kambites and Zur Izhakian. In particular, I will give some structural results about the semigroup of 'tropical matrices', that is, the set of all matrices with real entries, with respect to a 'tropical' multiplication defined by replacing the usual addition of numbers by maximisation, and replacing the usual multiplication of numbers by addition. I will give some information about the idempotent tropical matrices (that is, the matrices which square (tropically) to themselves) and their corresponding groups.

Extremal properties of tropical eigenvalues and solutions of tropical optimization problems

Nikolai Krivulin

We consider linear operators on finite-dimensional semimodules over idempotent semifields (tropical linear operators) and examine extremal properties of their spectrum. It turns out that the minimum value of certain nonlinear functionals that involve tropical linear operators and use multiplicative conjugate transposition is determined by their maximum eigenvalue, where the minimum and maximum are understood in the sense of the order induced by the idempotent addition in the carrier semifield. We exploit these extremal properties to solve multidimensional optimization problems formulated in the tropical mathematics setting, minimizing the functionals under constraints in the form of linear equations and inequalities. Complete closed-form solutions are given using a compact vector representation. Applications of the results to real-world problems are presented, including solutions to both unconstrained and constrained multidimensional minimax single facility location problems with rectilinear and Chebyshev distances.

Integrality in max-algebra

Peter Butkovic, Marie MacCaig

The equations Ax ≤ b, Ax = b, Ax ≤ λx, Ax = λx, Ax = By and Ax = Bx have been extensively studied in max-algebra, and methods to find solutions over ℝ are known. We consider the existence and description of integer solutions to these equations. First we show how existing methods for finding real solutions to these equations can be adapted to create methods for finding integer solutions or determining that none exist. Further, in the cases when these methods do not provide a polynomial algorithm, we present a generic case which admits a strongly polynomial method for finding integer solutions.
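For readers unfamiliar with the max-algebraic machinery: real solvability of A ⊗ x = b is classically checked via the principal solution x̂_j = min_i (b_i − a_ij); the system is solvable over ℝ iff A ⊗ x̂ = b. A sketch of this standard test (my illustration; integrality, the talk's subject, would then further restrict x̂):

```python
import numpy as np

def maxplus_Ax(A, x):
    """(A (x) x)_i = max_j (A_ij + x_j)."""
    return np.max(A + x[None, :], axis=1)

def principal_solution(A, b):
    """Greatest x with A (x) x <= b: x_hat_j = min_i (b_i - A_ij).
    A (x) x = b has a real solution iff x_hat attains equality."""
    return np.min(b[:, None] - A, axis=0)

A = np.array([[0.0, -1.0], [-1.0, 0.0]])
b = np.array([0.0, 0.0])
xhat = principal_solution(A, b)
print(xhat, np.allclose(maxplus_Ax(A, xhat), b))  # → [0. 0.] True
```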

Tropical bounds for eigenvalues of matrices

Andrea Marchesini, Marianne Akian, Stephane Gaubert

We show that the product of the moduli of the k largest eigenvalues of a matrix is bounded from above by the product of its k largest tropical eigenvalues, up to a combinatorial constant which is expressed in terms of a permanental compound matrix. For k = 1, this specializes to an inequality of Friedland (1986). We then discuss the optimality of such upper bounds for certain special classes of matrices, and give conditions under which analogous lower bounds can also be obtained.

Generating sets of tropical hemispaces

Viorel Nitica, Sergey Sergeev, Ricardo Katz

In this paper we consider tropical hemispaces, defined as tropically convex sets whose complements are also tropically convex, and tropical semispaces, defined as maximal tropically convex sets not containing a given point. We characterize tropical hemispaces by means of generating sets, composed of points and rays, that we call (P;R)-representations. With each hemispace we associate a matrix with coefficients in the completed tropical semiring, satisfying an extended rank-one condition. Our proof techniques are based on homogenization (lifting a convex set to a cone) and the relation between tropical hemispaces and semispaces.

Computation of the eigenvalues of a matrix polynomial by means of tropical algebra

Marianne Akian, Dario Bini, Stephane Gaubert, Vanni Noferini, Meisam Sharify

We show that the moduli of the eigenvalues of a matrix polynomial are bounded by the tropical roots. These tropical roots, which can be computed in linear time, are the non-differentiability points of an auxiliary tropical polynomial or, equivalently, the opposites of the slopes of its Newton polygon. This tropical polynomial depends only on the norms of the matrix coefficients. We extend to the case of matrix polynomials some bounds obtained by Hadamard, Ostrowski and Polya for the roots of scalar polynomials. These results show that the tropical roots can provide an a priori estimate of the moduli of the eigenvalues, which can be applied in the numerical computation of the eigenvalues. We developed a scaling method using tropical roots and showed experimentally that this scaling improves the backward stability of the computations, particularly in situations in which the data have various orders of magnitude. We also showed that using the tropical roots as initial approximations in the Ehrlich-Aberth method can substantially decrease the number of iterations in the numerical computation of the eigenvalues.
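To illustrate the scalar case of the construction above: the tropical roots of p(x) = Σ a_k x^k are (in log scale) the opposites of the slopes of the upper Newton polygon of the points (k, log10|a_k|), with multiplicities given by the edge widths. A sketch, assuming this sign convention (conventions vary in the literature):

```python
import numpy as np

def tropical_roots(coeff_norms):
    """Log10-scale tropical roots of sum_k a_k x^k, given |a_k|.
    Computed from the slopes of the upper convex hull (Newton
    polygon) of the points (k, log10|a_k|)."""
    pts = [(k, np.log10(a)) for k, a in enumerate(coeff_norms) if a > 0]
    hull = []  # upper convex hull, scanned left to right
    for p in pts:
        # pop while slopes would be non-decreasing (not an upper hull)
        while len(hull) >= 2 and \
              (hull[-1][1] - hull[-2][1]) * (p[0] - hull[-1][0]) <= \
              (p[1] - hull[-1][1]) * (hull[-1][0] - hull[-2][0]):
            hull.pop()
        hull.append(p)
    roots = []
    for (k0, y0), (k1, y1) in zip(hull, hull[1:]):
        roots += [-(y1 - y0) / (k1 - k0)] * (k1 - k0)  # multiplicity = width
    return roots

# x^2 + x + 10^6: both roots have modulus ~10^3, and indeed:
print(tropical_roots([1e6, 1.0, 1.0]))  # → [3.0, 3.0]
```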

Generalized Inverses and Applications

Generalized inverses and extensions of the Sherman-Morrison-Woodbury formulae

Nieves Castro-Gonzalez, M. Francisca Martinez-Serrano, Juan Robles

The Sherman-Morrison-Woodbury (SMW) formulae relate the inverse of a matrix after a small-rank perturbation to the inverse of the original matrix. Various authors have extended the SMW formulae to certain cases in which the matrix is singular or rectangular.

In this work we focus on complex matrices of the type A + Y GZ* and develop expressions for the Moore-Penrose inverse of this sum. We obtain generalizations of the formula

(A + Y GZ*)† = A† − A†Y (G† + Z*A†Y)†Z*A†,

which was established under certain conditions. We survey some known results on generalized inverses and their applications to SMW-type formulae, and present several extensions.
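The classical invertible case that this formula generalizes is easy to check numerically. The sketch below (a sanity check of the standard SMW identity on real random data, where Z* reduces to Zᵀ; it is not the authors' Moore-Penrose extension) verifies (A + YGZᵀ)⁻¹ = A⁻¹ − A⁻¹Y(G⁻¹ + ZᵀA⁻¹Y)⁻¹ZᵀA⁻¹:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shifted to keep A invertible
Y = rng.standard_normal((n, k))
G = rng.standard_normal((k, k)) + k * np.eye(k)
Z = rng.standard_normal((n, k))

inv = np.linalg.inv
# Classical Sherman-Morrison-Woodbury identity (all inverses assumed to exist)
lhs = inv(A + Y @ G @ Z.T)
rhs = inv(A) - inv(A) @ Y @ inv(inv(G) + Z.T @ inv(A) @ Y) @ Z.T @ inv(A)
print(np.allclose(lhs, rhs))  # → True
```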

Matrix groups constructed from {R, s+1, k}-potent matrices

Minnie Catral, Leila Lebtahi, Jeffrey Stuart, Nestor Thome

A matrix A ∈ C^{n×n} is called {R, s+1, k}-potent if A satisfies RA = A^{s+1}R, where R ∈ C^{n×n} and R^k = I_n. Such matrices were introduced and studied in [L. Lebtahi, J. Stuart, N. Thome, J.R. Weaver, Matrices A such that RA = A^{s+1}R when R^k = I, Linear Algebra and its Applications, http://dx.doi.org/10.1016/j.laa.2012.10.034, in press, 2013], and one of the properties noted in that paper was that these matrices are generalized group invertible. The aim of this talk is to construct a matrix group arising from a fixed {R, s+1, k}-potent matrix and to study some properties of this group.

Reverse order law for the generalized inverses

Dragana Cvetkovic-Ilic

The reverse order law for the Moore-Penrose inverse seems to have been studied first by Greville in the 1960s, who gave a necessary and sufficient condition for the reverse order law

(AB)† = B†A† (1)

for matrices A and B. This result was followed by many other interesting results concerning the reverse order law for the weighted Moore-Penrose inverse, the reverse order law for the product of three or more matrices, and, in the general case, the reverse order law for K-inverses, where K ⊆ {1, 2, 3, 4}.

We shall examine the reverse order law for {1,3}-, {1,4}-, {1,2,3}- and {1,2,4}-inverses in the setting of C*-algebras, and in that case we will present purely algebraic necessary and sufficient conditions under which this type of reverse order law holds. In the case of {1,3,4}-inverses, we will prove that (AB){1,3,4} ⊆ B{1,3,4} · A{1,3,4} ⇔ (AB){1,3,4} = B{1,3,4} · A{1,3,4}. Also, we will consider the mixed-type reverse order law and the absorption laws for generalized inverses.

On invertibility of combinations of k-potent operators

Chunyuan Deng

In this talk, we will report some recent results on the general invertibility of the products and differences of projectors and generalized projectors. The invertibility, the group invertibility and the k-potency of linear combinations of k-potents are investigated, under certain commutativity properties imposed on them. In addition, the range relations of projectors and detailed representations for various inverses are presented.

On the Level-2 Condition Number for the Moore-Penrose Inverse in Hilbert Space

Huaian Diao, Yimin Wei

We prove that cond†(T )−1 6 cond[2]† (T ) 6 cond†(T )+1

where T is a linear operator in a Hilbert space, cond†(T )is the condition number of computing its Moore-Penroseinverse, and cond

[2]† (T ) is the level-2 condition number

of this problem.

Formulas for the Drazin inverse of block anti-triangular matrices

Esther Dopazo, M. Francisca Martinez-Serrano, Juan Robles

Let A be an n × n complex matrix. The Drazin inverse of A is the unique matrix A^D satisfying the relations

A^D A A^D = A^D,   A^D A = A A^D,   A^{k+1} A^D = A^k,

where k = ind(A) is the smallest nonnegative integer such that rank(A^k) = rank(A^{k+1}). If ind(A) = 1, then A^D is the group inverse of A. The concept of the Drazin inverse plays an important role in various fields such as Markov chains, singular differential and difference equations, and iterative methods.
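For an idempotent matrix (A² = A, so ind(A) = 1) the group inverse is A itself, which makes the three defining relations easy to check numerically. A small sanity check (my example, not from the talk):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])    # idempotent: A @ A == A, so ind(A) = 1
AD = A                        # the group inverse of an idempotent is itself
k = 1

assert np.allclose(AD @ A @ AD, AD)                      # A^D A A^D = A^D
assert np.allclose(AD @ A, A @ AD)                       # A^D A = A A^D
assert np.allclose(np.linalg.matrix_power(A, k + 1) @ AD,
                   np.linalg.matrix_power(A, k))         # A^{k+1} A^D = A^k
print("Drazin relations hold")
```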

Deriving expressions for generalized inverses of matrices has been a topic that has received considerable attention in the literature over the last few years.

In particular, the problem of obtaining explicit formulas for the Drazin inverse and the group inverse (in case of existence) of a 2 × 2 block matrix M = (A B; C D), where A and D are square matrices, was posed as an open problem by Campbell and Meyer in 1979. Since then, an important formula has been given for the case of block triangular matrices, and recently some partial results have been obtained under specific conditions, but the general problem still remains open.

In 1983, Campbell proposed the problem of obtaining representations of the Drazin inverse of anti-triangular matrices of the form M = (A B; C 0), in connection with the problem of finding general expressions for the solutions of second-order systems of differential equations. Furthermore, these kinds of matrices appear in other applications such as graph theory, saddle-point problems and semi-iterative methods.

Therefore, in this work we focus on deriving formulas for the Drazin inverse of 2 × 2 anti-triangular matrices which extend results given in the literature. Moreover, the proposed formulas will be applied to some special structured matrices.

This research has been partly supported by Project MTM2010-18057, "Ministerio de Ciencia e Innovacion" of Spain.

Representations for the generalized Drazin inverse of block matrices in Banach algebras

Dijana Mosic

We present explicit representations of the generalized Drazin inverse of a block matrix, with the generalized Schur complement being generalized Drazin invertible, in a Banach algebra under some conditions. The provided results extend earlier works given in the literature.

Krylov Subspace Methods for Linear Systems

Restarted Block Lanczos Bidiagonalization Algorithm for Finding Singular Triplets

James Baglama

A restarted block Lanczos bidiagonalization method is described for computing a few of the largest or smallest singular triplets, or those nearest a specified positive number, of a large rectangular matrix. Leja points are used as shifts in the implicitly restarted Lanczos bidiagonalization method. The new method is often computationally faster than other implicitly restarted Lanczos bidiagonalization methods, requires much smaller storage space, and can be used to find interior singular values. Computed examples show the new method to be competitive with available schemes.

The convergence behavior of BiCG

Eric de Sturler, Marissa Renardy

The Bi-Conjugate Gradient method (BiCG) is a well-known Krylov method for linear systems of equations, proposed about 35 years ago. It forms the basis for some of the most successful iterative methods today, like BiCGSTAB. Nevertheless, its convergence behavior is poorly understood. The method satisfies a Petrov-Galerkin property, and hence its residual is constrained to a space of decreasing dimension. However, that does not explain why, for many problems, the method converges in, say, a hundred or a few hundred iterations for problems involving a hundred thousand or a million unknowns. For many problems, BiCG converges not much more slowly than an optimal method, like GMRES, even though it does not satisfy any optimality properties. In fact, Anne Greenbaum showed that every three-term recurrence, for the first (n/2) + 1 iterations (for a system of dimension n), is BiCG for some initial 'left' starting vector. So why does the method work so well in most cases? We will introduce Krylov methods, discuss the convergence of optimal methods, describe the BiCG method, and provide an analysis of its convergence behavior.

Robust and Scalable Schur Complement Method using Hierarchical Parallelism

Xiaoye Li, Ichitaro Yamazaki

Modern numerical simulations give rise to sparse linear equations that are becoming increasingly difficult to solve using standard techniques. Matrices that can be directly factorized are limited in size due to large memory and inter-node communication requirements. Preconditioned iterative solvers require less memory and communication, but often require an effective preconditioner, which is not readily available. The Schur complement based domain decomposition method has great potential for providing reliable solutions of large-scale, highly indefinite algebraic equations.

We developed a hybrid linear solver, PDSLin (Parallel Domain decomposition Schur complement based LINear solver), which employs techniques from both sparse direct and iterative methods. In this method, the global system is first partitioned into smaller interior subdomain systems, which are connected only through the separators. To compute the solution of the global system, the unknowns associated with the interior subdomain systems are first eliminated to form the Schur complement system, which is defined only on the separators. Since most of the fill occurs in the Schur complement, the Schur complement system is solved using a preconditioned iterative method. Then, the solution on the subdomains is computed by using the partial solution on the separators and solving another set of subdomain systems.

For a scalable implementation, it is imperative to exploit multiple levels of parallelism; namely, solving independent subdomains in parallel and using multiple processors per subdomain. This hierarchical parallelism can maintain numerical stability as well as scalability. In this multilevel parallelization framework, performance bottlenecks due to load imbalance and excessive communication occur at two levels: in an intra-processor group assigned to the same subdomain and among inter-processor groups assigned to different subdomains. We developed several techniques to address these issues, such as taking advantage of the sparsity of right-hand-side columns during sparse triangular solutions with interfaces, load balancing the sparse matrix-matrix multiplication that forms the update matrices, and designing an effective asynchronous point-to-point communication of the update matrices. We present experimental results to demonstrate that our hybrid solver can efficiently solve very large, highly indefinite linear systems on thousands of processors.

GMRES convergence bounds that depend on the right-hand side vector

David Titley-Peloquin, Jennifer Pestana, Andrew Wathen

GMRES is one of the most popular Krylov subspace methods for solving linear systems of equations Bx = b, B ∈ C^{n×n}, b ∈ C^n. However, obtaining convergence bounds for GMRES that are generally descriptive is still considered a difficult problem. In this talk, we introduce bounds for GMRES applied to systems with nonsingular, diagonalizable coefficient matrices that explicitly include the initial residual vector. The first of these involves a polynomial least-squares problem on the spectrum of B, while the second reduces to an ideal GMRES problem on a rank-one modification of the diagonal matrix of eigenvalues of B. Numerical experiments demonstrate that these bounds can accurately describe GMRES convergence.

Implicitly Restarting the LSQR Algorithm

James Baglama, Daniel Richmond

LSQR is an iterative algorithm for solving large sparse least squares problems min_x ‖b − Ax‖2. For certain matrices A, LSQR can exhibit slow convergence. Results in this presentation pertain to a strategy used to speed up the convergence of LSQR, namely, using an implicitly restarted Golub-Kahan bidiagonalization. The restarting is used to improve the search space and is carried out by applying the largest harmonic Ritz values as implicit shifts while simultaneously using LSQR to compute the solution to the minimization problem. Theoretical results show why this strategy is advantageous, and numerical examples show this method to be competitive with existing methods.
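For context, the non-restarted LSQR baseline is available in SciPy; the sketch below (my illustration, not the speakers' restarted variant) solves a small overdetermined least squares problem and compares against a direct solve.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 10))   # tall matrix: overdetermined system
b = rng.standard_normal(50)

x_lsqr = lsqr(A, b)[0]              # Golub-Kahan bidiagonalization based solver
x_direct = np.linalg.lstsq(A, b, rcond=None)[0]
print(np.allclose(x_lsqr, x_direct, atol=1e-5))  # → True
```

On well-conditioned problems like this one LSQR converges quickly; the talk's restarting strategy targets the ill-conditioned cases where it does not.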

Using ILU(0) to estimate the diagonal of the inverse of a matrix

Andreas Stathopoulos, Lingfei Wu, Jesse Laeuchli, Stratis Gallopoulos, Vasilis Kalatzis

For very large, sparse matrices, the calculation of the diagonal or even the trace of their inverse is a very computationally intensive task that involves the use of Monte Carlo (MC) methods. Hundreds or thousands of quadratures (x^T A^{-1} x) are averaged to obtain 1-2 digits of accuracy in the trace. An ILU(k) preconditioner can speed up the computation of the quadratures, but it can also be used to reduce the MC variance (e.g., by estimating the diagonal of A^{-1} − (LU)^{-1}). Such variance reduction may not be sufficient, however. We take a different approach based on two observations.

First, there is an old, but not much used, algorithm to obtain the diagonal of M = (LU)^{-1} when L, U are sparse, in time linear in the number of nonzero elements of L, U. For some matrices, the factors from the ILU(0) preconditioner provide a surprisingly accurate estimate for the trace of A^{-1}. Second, we observed that, in general, diag(M) captures the pattern (if not the individual values) of diag(A^{-1}) very well. Therefore, if we solve for the exact A^{-1}_{jj} = e_j^T A^{-1} e_j at j = 1, ..., k anchor points, we can find an interpolating function f(M_{jj}, j) ≈ A^{-1}_{jj} for all j. Preliminary experiments have shown that k = 100 is often enough to achieve O(10^{-2}) accuracy in the trace estimation, much faster than classical MC.
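The classical MC baseline referred to above is the Hutchinson estimator: average the quadratures z^T A^{-1} z over random ±1 vectors z. A minimal dense-matrix sketch (illustrative only; the talk's point is to augment or replace this with ILU(0) information):

```python
import numpy as np

def hutchinson_trace_inv(A, n_samples, rng):
    """Estimate trace(A^{-1}) by averaging quadratures z^T A^{-1} z
    over random Rademacher (+/-1) vectors z."""
    n = A.shape[0]
    est = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z @ np.linalg.solve(A, z)   # one quadrature x^T A^{-1} x
    return est / n_samples

rng = np.random.default_rng(0)
n = 20
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)                # SPD test matrix

exact = np.trace(np.linalg.inv(A))
approx = hutchinson_trace_inv(A, 2000, rng)
print(abs(approx - exact) / exact)         # relative error: a few percent here
```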

MPGMRES: a generalized minimum residual method with multiple preconditioners

Chen Greif, Tyrone Rees, Daniel Szyld

Standard Krylov subspace methods only allow the user to choose a single preconditioner, although in many situations there may be a number of possibilities. Here we describe an extension of GMRES, multi-preconditioned GMRES, which allows the use of more than one preconditioner. We give some theoretical results, propose a practical algorithm, and present numerical results from problems in domain decomposition and PDE-constrained optimization. These numerical experiments illustrate the applicability and potential of the multi-preconditioned approach.

Linear Algebra Education Issues

A Modern Online Linear Algebra Textbook

Robert Beezer

A First Course in Linear Algebra is an open source textbook which debuted in 2004. The use of an open license makes it possible to construct, distribute and employ the book in ways that would be impossible with a proprietary text.

I will briefly describe the organization of the book and some novel features of the content. Several major technological enhancements were made to the book in the last six months of 2012, which have resulted in an online version that is unlike any other mathematics textbook. I will describe and demonstrate these changes and how they were implemented, and discuss how they have changed the way I teach introductory linear algebra.

Flipping the Technology in Linear Algebra

Tom Edgar

There are many excellent textbooks that incorporate the use of Sage or other computer algebra packages for a linear algebra course. What if you prefer another text, are part of a multi-section course, or your department does not give you the freedom to choose your textbook? In this talk we will discuss the possibilities and practicalities of incorporating the use of technology (in particular Sage) in any linear algebra class. We detail our experience of creating mini-videos to teach the basics of Sage and designing out-of-class projects that require computer use to enhance students' understanding of key concepts or to introduce them to applications of linear algebra. Using the "flipped classroom" idea for technology provides the possibility of using technology in any class without changing the structure of the class. In particular, we will describe the process of creating videos, discuss the problems and successes we encountered, and make suggestions for future classes. Though we have only limited experience with this method, we also intend to describe some of the initial, perceived pedagogical benefits of incorporating technology into courses in this way.

The Sage Cell server and other technology in teaching

Jason Grout

We will discuss uses of technology in teaching linear algebra, including use of the new Sage Cell Server (https://sagecell.sagemath.org). Sage (http://sagemath.org) is a free open-source math software system that includes both numerical and theoretical linear algebra capabilities. For example, in addition to standard matrix and vector operations over various base rings and fields, you can also directly create and manipulate vector spaces and linear transformations. The Sage Cell server makes it easy to embed arbitrary Sage or Octave (Matlab-like) computations and interactive explorations into a web page, as links in a pdf file, or as QR codes in a printed book. Using technology can be a powerful way to allow student exploration and to reinforce classroom discussion.

Linear algebra applications to data sciences

Sandra Kingan

Typical data sciences courses list elementary statistics and linear algebra as prerequisites, but the linear algebra that students are expected to know includes, for example, multiple regression, singular value decomposition, principal component analysis, and latent semantic indexing. These topics are rarely covered in a first linear algebra course. As a result, for many students these techniques become a few lines of code accompanied by a vague understanding. In this talk I will describe the development of a rigorous course on the mathematical methods for analyzing data that also incorporates modern real-world examples. Development of this course was supported by an NSF TUES grant.

Interactive Mobile Linear Algebra with Sage

Sang-Gu Lee

For over 20 years, the issue of using an adequate computer algebra system (CAS) tool in the teaching and learning of linear algebra has been raised constantly. A variety of CAS tools have been introduced in several linear algebra textbooks. However, due to some practical constraints in Korea, such tools have not been introduced in the classroom, and teaching and learning have instead focused on the theoretical aspects of linear algebra. We have tried to find or create a simple and comprehensive method for using ICT in our classes.

Nowadays, most of our students have a smartphone. We found that Sage could be an excellent solution for mathematical computation and simulation on these devices. To achieve our objectives, we started in 2009 to develop web content and smartphone apps for delivering our lectures in a mobile environment. We have developed tools for mobile mathematics with smartphones for the teaching of linear algebra. Furthermore, we offered several QR codes to our students with the goal of helping them learn some linear algebra concepts via problem solving.

In this talk, we introduce our mobile Sage infrastructure and how one could use it in linear algebra classes. Our talk will focus on what we have done in our linear algebra class with an interactive mobile-learning environment.

The Role of Technology in Linear Algebra Education

Judi McDonald

What are the opportunities and challenges created by modern technology? What role should technology play in linear algebra education? How and when is it advantageous to use a computer for presentations? When are handwritten "live" presentations better? What should go into e-books? How do professors handle the fact that students can "google" the solutions? What are the advantages and disadvantages of online homework? How can clickers be used effectively in the classroom? As you can see, I have many questions about where modern technology fits into the learning of linear algebra, and will use this talk to get faculty thinking about both the opportunities and the challenges.

A proof evaluation exercise in an introductory linear algebra course

Rachel Quinlan

In a first-year university linear algebra course, students were presented with five purported proofs of the statement that every linear transformation of R^2 fixes the origin. The students were familiar with the definition of a linear transformation as an additive function that respects scalar multiplication, and with the matrix representation of a linear transformation of R^2. They were asked to consider each argument and decide on its correctness. They were also asked to rank the five "proofs" in order of preference and to comment on their rankings.

Some recent work of K. Pfeiffer proposes a theoretical frame for the consideration of proof evaluation and of students' responses to tasks of this kind. With this in mind, we discuss our students' assessments of the five presented arguments. It will be suggested that exercises of this nature can give us some useful understanding of our students' thinking about the mechanisms and purposes of proof and (in this instance) about some essential concepts of linear algebra.

Teaching Linear Algebra in Three Worlds of Mathematical Thinking

Sepideh Stewart

In my PhD thesis I created and applied a theoretical framework combining the strengths of two major mathematics education theories (Tall's three worlds and APOS) in order to investigate the learning of linear algebra concepts at the university level. I produced and analyzed a large data set concerning students' embodied and symbolic ways of thinking about elementary linear algebra concepts. It was anticipated that this study may provide suggestions with the potential for widespread positive consequences for learning mathematics. In this talk I highlight aspects of this research as well as my current project on employing Tall's theory together with Schoenfeld's theory of Resources, Orientations and Goals (ROGs) to address the teaching aspects of linear algebra.

The Fundamental Theorem of Linear Algebra

Gilbert Strang

This theorem is about the four subspaces associated with an m by n matrix A of rank r.

Part 1 (Row rank = Column rank). The column spaces of A and A^T have the same dimension r. I would like to give my two favorite proofs (admitting that one came from Wikipedia).

Part 2. The nullspaces of A^T and A are orthogonal to those column spaces, with dimensions m − r and n − r.

Part 3. The SVD gives orthonormal bases for the four subspaces, and in those bases A and A^T are diagonal:

Av_j = σ_j u_j and A^T u_j = σ_j v_j, with σ_j > 0 for 1 ≤ j ≤ r.

This step involves the properties of the symmetric matrices A^T A and AA^T. Maybe it is true that understanding this theorem is the goal of a linear algebra course. The best example of the SVD is a first difference matrix A.

Good first day examples in linear algebra

David Strong

In any mathematics class, examples that involve multiple concepts are of great educational value. Such an example is of even more use if it is simple enough to be introduced on the first day of class. I will discuss a few good examples that I have used in my own first semester linear algebra course.

Twenty years after the LACSG report: What happens in our textbooks and in our classrooms?

Jeffrey Stuart

In 1993, David Carlson, Charles Johnson, David Lay and Duane Porter published the Linear Algebra Curriculum Study Group Recommendations for the First Course in Linear Algebra in the College Mathematics Journal. Their paper laid out what ought to be emphasized (or de-emphasized) in a first course and, consequently, in the textbook for a first course. An examination of the multitude of linear algebra books currently on the market (at least in the USA) shows that at least some of the LACSG recommendations have come to pass: all recent texts that this author has examined emphasize R^n and matrices over abstract vector spaces and linear functions. Matrix multiplication is usually discussed in terms of linear combinations of columns. The needs of client disciplines are often reflected in a variety of applied problems and, in some texts, entire sections or chapters. Numerical algorithms abound. And yet anyone who has taught from one of the over-stuffed calculus textbooks knows that what is in a textbook can differ greatly from what is covered or emphasized in a course. We will review the content of linear algebra textbooks in the USA, and propose a survey of mathematics departments to better understand what is actually being taught in linear algebra courses.

Designing instruction that builds on student reasoning in linear algebra: An example from span and linear independence

Megan Wawro

Research into student thinking in mathematics is often used to inform curriculum design and instructional practices. I articulate such an example in the form of an instructional sequence that supports students' reinvention of the concepts of span and linear independence. The sequence, whose creation was guided by the instructional design theory of Realistic Mathematics Education, builds on students' informal and intuitive ways of reasoning towards more conventional, generalizable, and abstract ways of reasoning. During the presentation I will share the instructional sequence and samples of student work, as well as discuss underlying theoretical influences that guided this work.

Linear Algebra Problems in Quantum Computation

Rank reduction for the local consistency problem

Jianxin Chen

We address the problem of how simple a solution can be for a given quantum local consistency instance. More specifically, we investigate how small the rank of the global density operator can be if the local constraints are known to be compatible. We prove that any compatible local density operators can be satisfied by a low-rank global density operator. Then we study both fermionic and bosonic versions of the N-representability problem as applications. After applying the channel-state duality, we prove that any compatible local channels can be obtained through a global quantum channel with small Kraus rank. This is joint work with Zhengfeng Ji, Alexander Klyachko, David W. Kribs, and Bei Zeng.

Who’s Afraid of Quantum Computers?

Man-Duen Choi

Suddenly, there arrives the new era of quantum computers, with the most frightening effects of quantum optics. In this expository talk, I will present my own experience of a pure mathematician's adventure in quantum wonderland. In particular, I seek sense and sensibility of quantum entanglements, with pride and prejudice in matrix theory.

On separable and entangled states in QIS

Shmuel Friedland

In this talk we discuss the notion of separable and entangled states in quantum information science. First we discuss the decidability problem: determine whether a given state is separable or not. Second we discuss how to find the distance, called the relative entropy, from an entangled state to the set of separable states. We will concentrate on the bipartite case.

Quantum measurements and maps preservingstrict convex combinations and pure states

Jinchuan Hou, Lihua Yang

Let S(H) be the convex set of all states (i.e., the positive operators with trace one) on a complex Hilbert space H. It is shown that a map ψ : S(H) → S(K) with 2 ≤ dim H < ∞ preserves pure states and strict convex combinations if and only if ψ has one of the forms: (1) ρ ↦ σ0 for any ρ ∈ S(H); (2) ψ(Pur(H)) = {Q1, Q2}; (3) ρ ↦ MρM*/Tr(MρM*), where σ0 is a pure state on K and M : H → K is an injective linear or conjugate linear operator. For multipartite systems, we also give a structure theorem for maps that preserve separable pure states and strict convex combinations. These results allow us to characterize injective (local) quantum measurements and answer some conjectures proposed in [J. Phys. A: Math. Theor. 45 (2012) 205305].

Linear preserver problems arising in quantum information science

Ajda Fosner, Zejun Huang, Chi-Kwong Li, Yiu-Tung Poon, Nung-Sing Sze

For a positive integer n, let Mn and Hn be the set of n×n complex matrices and Hermitian matrices, respectively. Suppose m, n ≥ 2 are positive integers. In this talk, we will characterize the linear maps φ on Mmn (Hmn) such that

f(φ(A⊗B)) = f(A⊗B)

for all A ∈ Mm (Hm) and B ∈ Mn (Hn), where f(X) is the spectrum, the spectral radius, the numerical radius, a higher numerical range, a Ky Fan norm or a Schatten norm of X.

On the Minimum Size of Unextendible Product Bases

Nathaniel Johnston, Jianxin Chen

An unextendible product basis is a set of mutually orthogonal product vectors in a tensor product Hilbert space such that there is no other product vector orthogonal to them all. A long-standing open question asks for the minimum number of vectors needed to form an unextendible product basis given the local Hilbert space dimensions. A partial solution was found by Alon and Lovasz in 2001, but since then only a few other cases have been solved. We solve this problem in many special cases, including the case where the Hilbert space is bipartite (i.e., it is the tensor product of just two local Hilbert spaces), as well as the qubit case (i.e., where every local Hilbert space is 2-dimensional).

Private quantum subsystems and connections with quantum error correction

David Kribs

No abstract entered.

Facial structures for bipartite separable states and applications to the separability criterion

Seung-Hyeok Kye

One of the most important criteria for separability is given by the operation of partial transpose, which is the one-sided transpose for the tensor product of matrices. The PPT (positive partial transpose) criterion tells us that the partial transpose of a separable state must be positive semi-definite. In the convex set of all PPT states, the boundary between separability and entanglement consists of faces for separable states. This is why we are interested in the facial structures of the convex set of all separable states. The whole structure is far from being understood completely. In this talk, we will restrict ourselves to the two-qutrit and qubit-qudit cases to see how the facial structures can be used to distinguish separable states from entangled states.
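
The partial transpose test described above is easy to run numerically. The following sketch (plain numpy, transposing the second subsystem by convention) builds the partial transpose of a bipartite density matrix and applies the PPT criterion:

```python
import numpy as np

def partial_transpose(rho, dims):
    """Partial transpose on the second subsystem of a bipartite state rho
    acting on C^{d1} (x) C^{d2}."""
    d1, d2 = dims
    R = rho.reshape(d1, d2, d1, d2)        # indices (i, k; j, l)
    return R.transpose(0, 3, 2, 1).reshape(d1 * d2, d1 * d2)

def is_ppt(rho, dims, tol=1e-10):
    """PPT criterion: every separable state has a positive semidefinite
    partial transpose, so a negative eigenvalue certifies entanglement."""
    return bool(np.linalg.eigvalsh(partial_transpose(rho, dims)).min() >= -tol)
```

On the two-qubit Bell state the partial transpose has eigenvalue −1/2, so the criterion correctly flags entanglement; in the 2×2 and 2×3 cases PPT is moreover equivalent to separability.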

Some non-linear preservers on quantum structures

Lajos Molnar

In this talk we present some of our results, mostly on non-linear transformations of certain quantum structures that can be viewed as different kinds of symmetries.

We discuss order isomorphisms on observables, spectral-order isomorphisms and sequential endomorphisms of the space of Hilbert space effects, and transformations on quantum states which preserve the relative entropy or a general quantum f-divergence.

Transformations on density matrices preserving the Holevo bound

Gergo Nagy, Lajos Molnar

Given a probability distribution λ1, . . . , λm, for any collection ρ1, . . . , ρm of density matrices the corresponding Holevo bound is defined by

χ(ρ1, . . . , ρm) = S(∑_{k=1}^{m} λk ρk) − ∑_{k=1}^{m} λk S(ρk),

where S(ρ) = −Tr ρ log ρ is the von Neumann entropy. This is one of the most important numerical quantities appearing in quantum information science. In this talk we present the structure of all maps φ on the set (or on any dense subset) of density matrices which leave the Holevo bound invariant, i.e., which satisfy

χ(φ(ρ1), . . . , φ(ρm)) = χ(ρ1, . . . , ρm)

for all possible collections ρ1, . . . , ρm. It turns out that every such map is induced by a unitary matrix, and hence our result is closely related to Wigner's famous theorem on the structure of quantum mechanical symmetry transformations. Another result of similar spirit involving quantum Tsallis entropy will also be presented.
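
The Holevo quantity defined above can be evaluated directly from eigenvalues; a minimal numpy sketch (natural-logarithm convention assumed):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log rho), computed from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # 0 log 0 = 0 by convention
    return float(-np.sum(evals * np.log(evals)))

def holevo_bound(lams, rhos):
    """chi = S(sum_k lam_k rho_k) - sum_k lam_k S(rho_k)."""
    avg = sum(l * r for l, r in zip(lams, rhos))
    return von_neumann_entropy(avg) - sum(l * von_neumann_entropy(r)
                                          for l, r in zip(lams, rhos))

# Two orthogonal pure qubit states with equal weights give chi = log 2.
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])
rho1 = np.array([[0.0, 0.0], [0.0, 1.0]])
chi = holevo_bound([0.5, 0.5], [rho0, rho1])
```

Conjugating every ρk by a fixed unitary leaves χ unchanged, consistent with the unitarily-induced maps appearing in the result stated above.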

Decomposing a Unitary Matrix into a Product of Weighted Quantum Gates


Chi-Kwong Li, Diane Christine Pelejo, Rebecca Roberts

Let d = 2^n for some n. It is known that any unitary matrix U ∈ Md(C) can be written as a product of d(d−1)/2 fully controlled quantum gates (or two-level gates, or Givens rotations). These gates are computationally costly and we wish to replace these expensive gates with more efficient ones. For k = 0, 1, 2, . . . , n−1, we assign the cost or weight k to a quantum gate with k controls. We develop an algorithm to decompose any d-by-d unitary matrix into a combination of these gates with potentially minimal cost. We write a program that will implement this algorithm. Possible modifications to the problem will also be presented.
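
The two-level decomposition mentioned above can be sketched as follows. This is the textbook elimination scheme using at most d(d−1)/2 Givens-type two-level unitaries plus one diagonal phase factor, not the authors' weighted-gate algorithm:

```python
import numpy as np

def two_level_decomposition(U, tol=1e-12):
    """Reduce a unitary U to a diagonal via two-level (Givens-type)
    unitaries: G_m ... G_1 U = D.  Returns [G_1, ..., G_m, D^*], so U equals
    the product of g.conj().T over the returned list, in order."""
    d = U.shape[0]
    A = np.array(U, dtype=complex)
    gates = []
    for j in range(d - 1):
        for i in range(d - 1, j, -1):      # zero A[i, j] from the bottom up
            if abs(A[i, j]) < tol:
                continue
            a, b = A[i - 1, j], A[i, j]
            r = np.hypot(abs(a), abs(b))
            G = np.eye(d, dtype=complex)   # acts only on rows i-1 and i
            G[i - 1, i - 1] = np.conj(a) / r
            G[i - 1, i] = np.conj(b) / r
            G[i, i - 1] = b / r
            G[i, i] = -a / r
            A = G @ A
            gates.append(G)
    gates.append(A.conj().T)               # remaining diagonal phase factor
    return gates
```

Each Givens factor is a two-level unitary touching two coordinates, so the count matches the d(d−1)/2 bound quoted in the abstract.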

On trumping

David Kribs, Rajesh Pereira, Sarah Plosker

Trumping is a partial order on vectors in Rn that generalizes the more familiar concept of majorization. Both of these partial orders have recently been considered in quantum information theory and shown to play an important role in entanglement theory. We explore connections between trumping and power majorization, a generalization of majorization that has been studied in the theory of inequalities. We also discuss extending the idea of trumping to continuous functions on Rn, analogous to the notion of continuous majorization.
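
As a concrete illustration of trumping versus majorization, here is a numpy sketch using the well-known Jonathan-Plenio catalysis example (not taken from this abstract): y does not majorize x, yet after tensoring with a catalyst c the products become comparable, i.e., y trumps x.

```python
import numpy as np

def majorizes(x, y, tol=1e-12):
    """True if x majorizes y: the decreasing partial sums of x dominate
    those of y, and the total sums agree."""
    xs = np.sort(x)[::-1].cumsum()
    ys = np.sort(y)[::-1].cumsum()
    return bool(abs(xs[-1] - ys[-1]) < tol and np.all(xs >= ys - tol))

# Jonathan-Plenio catalysis example.
x = np.array([0.4, 0.4, 0.1, 0.1])
y = np.array([0.5, 0.25, 0.25, 0.0])
c = np.array([0.6, 0.4])
```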

On maximally entangled states

Edward Poon

A mixed state is said to be maximally entangled if it is a convex combination of maximally entangled pure states. We investigate the structure of maximally entangled states on a bipartite system, and of the space spanned by such states. We then apply our results to the problem of finding the linear preservers of such states.

An envelope for the spectrum of a matrix

Panayiotis Psarrakos, Michael Tsatsomeros

We introduce and study an envelope-type region E(A) in the complex plane that contains the eigenvalues of a given n×n complex matrix A. E(A) is the intersection of an infinite number of regions defined by cubic curves. The notion and method of construction of E(A) extend the notion of the numerical range of A, F(A), which is known to be an intersection of an infinite number of half-planes; as a consequence, E(A) is contained in F(A) and represents an improvement in localizing the spectrum of A.
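
The half-plane description of F(A) quoted above can be checked numerically: for each angle θ, the largest eigenvalue of the Hermitian part of e^{−iθ}A is the support function of F(A), giving one supporting half-plane. A sketch verifying that the spectrum lies in the intersection:

```python
import numpy as np

def eigs_in_numerical_range(A, n_angles=360, tol=1e-10):
    """Check that every eigenvalue of A lies in F(A), using the half-plane
    description: F(A) is the intersection over theta of
    { z : Re(e^{-i theta} z) <= lambda_max(H_theta) },
    where H_theta = (e^{-i theta} A + e^{i theta} A^*) / 2."""
    eigs = np.linalg.eigvals(A)
    for t in np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False):
        H = (np.exp(-1j * t) * A + np.exp(1j * t) * A.conj().T) / 2.0
        s = np.linalg.eigvalsh(H)[-1]      # support function of F(A) at t
        if np.any(np.real(np.exp(-1j * t) * eigs) > s + tol):
            return False
    return True
```

E(A) tightens each such half-plane to a region bounded by a cubic curve, so any eigenvalue-localization test of this form can only improve.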

Constructing optimal entanglement witnesses by permutations

Xiaofei Qi, Jinchuan Hou

A linear map ΦD : Mn → Mn is called a D-type map if ΦD has the form (aij) ↦ diag(f1, f2, ..., fn) − (aij) with (f1, f2, ..., fn) = (a11, a22, ..., ann)D for an n × n nonnegative matrix D = (dij) (i.e., dij ≥ 0 for all i, j). For any permutation π of {1, 2, . . . , n} and t > 0, let Φt,π : Mn → Mn be the D-type map of the above form with D = (n − t)In + tPπ. In this talk, we give necessary and sufficient conditions for Φt,π to be positive, indecomposable and optimal, respectively.

A Linear Algebraic Approach in Constructing Quantum Error Correction Code

Raymond Nung-Sing Sze

In the study of quantum error correction, quantum information scientists are interested in constructing practical and efficient schemes for overcoming the decoherence in quantum systems. In particular, researchers would like to obtain useful quantum correction codes and construct simple encoding and decoding quantum circuits. These problems can be reformulated and studied in a linear algebraic approach. In this talk, we review some recent work in quantum error correction along this direction.

This talk is based on joint work with Chi-Kwong Li,Mikio Nakahara, Yiu-Tung Poon, and Hiroyuki Tomita.

Gradient Flows for the Minimum Distance to the Sum of Adjoint Orbits

Xuhua Liu, Tin-Yau Tam

Let G be a connected semisimple Lie group and g its Lie algebra. Let g = k ⊕ p be the Cartan decomposition corresponding to a Cartan involution θ of g. The Killing form B induces a positive definite symmetric bilinear form Bθ on g defined by Bθ(X,Y) = −B(X, θY). Given A0, A1, . . . , AN ∈ g, we consider the optimization problem

min_{ki ∈ K} ‖ ∑_{i=1}^{N} Ad(ki)Ai − A0 ‖,

where the norm ‖ · ‖ is induced by Bθ and K is the connected subgroup of G with Lie algebra k. We obtain the gradient flow of the corresponding smooth function on the manifold K × · · · × K. Our results give unified extensions of several results of Li, Poon, and Schulte-Herbruggen. They are also true for reductive Lie groups.

Higher dimensional witnesses for entanglement in 2N-qubit chains

Justyna Zwolak, Dariusz Chruscinski

Entanglement is considered to be the most nonclassical manifestation of quantum physics and lies at the center of interest for physicists in the 21st century, as it is fundamental to future quantum technologies. Characterization of entanglement (i.e., delineating entangled from separable states) is equivalent to the characterization of positive, but not completely positive (PnCP), maps over matrix algebras.

For quantum states living in 2 × 2 and 2 × 3 dimensional spaces there exists a complete characterization of the separability problem (due to the celebrated Peres-Horodecki criterion). For higher dimensions, however, this task becomes increasingly difficult. There is considerable effort devoted to constructing PnCP maps, but a general procedure is still not known.

In our research, we are developing PnCP maps for higher dimensional systems. For instance, we recently generalized the Robertson map in a way that naturally meshes with 2N-qubit systems, i.e., its structure respects the 2^{2N} growth of the state space. Using linear algebra techniques, we proved that this map is positive, but not completely positive, and also indecomposable and optimal. As such, it can be used to create witnesses that detect (bipartite) entanglement. We also determined the relation of our maps to entanglement breaking channels. As a byproduct, we provided a new example of a family of PPT (Positive Partial Transpose) entangled states. We will discuss this map as well as the new classes of entanglement witnesses.

Linear Algebra, Control, and Optimization

Linear Systems: A Measurement Based Approach

Lee Keel, Shankar Bhattacharyya

We present recent results obtained by us on the analysis, synthesis and design of systems described by linear equations. As is well known, linear equations arise in most branches of science and engineering, as well as in social, biological and economic systems. The novelty of our approach is that no models of the system are assumed to be available, nor are they required. Instead, we show that a few measurements made on the system can be processed strategically to directly extract design values that meet specifications without constructing a model of the system, implicitly or explicitly. We illustrate these new concepts by applying them to linear DC and AC circuits, mechanical, civil and hydraulic systems, signal flow block diagrams and control systems. These applications are preliminary and suggest many open problems.

Our earlier research has shown that the representation of complex systems by high order models with many parameters may lead to fragility, that is, the drastic change of system behaviour under infinitesimally small perturbations of these parameters. This led to research on model-free, measurement based approaches to design. The results presented here are our latest effort in this direction, and we hope they will lead to attractive alternatives to model-based design of engineering and other systems.

A Hybrid Method for Time-Delayed Robust and Minimum Norm Quadratic Partial Eigenvalue Assignment

Biswa Datta

The quadratic partial eigenvalue assignment problem concerns reassigning a small set of eigenvalues of a large quadratic matrix pencil to suitably chosen ones by using feedback control, while leaving the remaining large number of eigenvalues and the associated eigenvectors unchanged. The problem arises in active vibration control of structures. For practical effectiveness, the feedback matrices have to be constructed in such a way that both the feedback norms and the closed-loop eigenvalue sensitivity are minimized. These minimization problems give rise to difficult nonlinear optimization problems. Most of the existing solutions of these problems are based on the knowledge of system matrices, and do not make use of the "receptances" which are readily available from measurements. Furthermore, work on these eigenvalue assignment problems with time-delay in the control input is rare.

In this paper, we propose a novel hybrid approach combining both system matrices and receptances to solve these problems, both with and without time delay. This hybrid approach is computationally more efficient than the approaches based on system matrices or the receptances alone. The numerical effectiveness of the proposed methods is demonstrated by results of numerical experiments.

Calculating the H∞-norm using the implicit determinant method

Melina Freitag

The H∞-norm of a transfer function is an important property for measuring robust stability. In this talk we present a new fast method for calculating this norm which uses the implicit determinant method. The algorithm is based on finding a two-dimensional Jordan block corresponding to a pure imaginary eigenvalue in a certain two-parameter Hamiltonian eigenvalue problem introduced by Byers (SIAM J. Sci. Statist. Comput., 9 (1988), pp. 875-881). At each step a structured linear system needs to be solved, which can be done efficiently by transforming the problem into controller Hessenberg form. Numerical results show the performance of this algorithm for several examples, and comparison is made with other methods for the same problem.

This is joint work with Paul Van Dooren (Louvain-la-Neuve) and Alastair Spence (Bath).
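
The Hamiltonian eigenvalue test of Byers cited above also underlies a simple bisection method, sketched here for the strictly proper case D = 0 (this is the classical bisection idea, not the implicit determinant method of the talk):

```python
import numpy as np

def hinf_norm_bisection(A, B, C, tol=1e-8):
    """Bisection for the H-infinity norm of G(s) = C (sI - A)^{-1} B, with A
    stable and D = 0, using Byers' Hamiltonian test: gamma <= ||G||_inf iff
    H(gamma) = [[A, B B^T/gamma], [-C^T C/gamma, -A^T]] has a purely
    imaginary eigenvalue."""
    def imag_eig(gamma):
        H = np.block([[A, (B @ B.T) / gamma],
                      [-(C.T @ C) / gamma, -A.T]])
        lam = np.linalg.eigvals(H)
        return bool(np.any(np.abs(lam.real) < 1e-9 * max(1.0, np.abs(lam).max())))
    lo, hi = 1e-8, 1.0
    while imag_eig(hi):                    # grow hi until it exceeds the norm
        lo, hi = hi, 2.0 * hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imag_eig(mid):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For G(s) = 1/(s+1) the Hamiltonian has imaginary eigenvalues exactly for γ ≤ 1, so the bisection converges to the known norm 1.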

Variational Integrators for Matrix Lie Groups with Applications to Geometric Control

Taeyoung Lee, Melvin Leok, N. Harris McClamroch

The geometric approach to mechanics serves as the theoretical underpinning of innovative control methodologies in geometric control theory. These techniques allow the attitude of satellites to be controlled using changes in their shape, as opposed to chemical propulsion, and are the basis for understanding the ability of a falling cat to always land on its feet, even when released in an inverted orientation.

We will discuss the application of geometric structure-preserving numerical schemes to the optimal control of mechanical systems. In particular, we consider Lie group variational integrators, which are based on a discretization of Hamilton's principle that preserves the Lie group structure of the configuration space. The importance of simultaneously preserving the symplectic and Lie group properties is also demonstrated.

Optimal control of descriptor systems

Peter Kunkel, Volker Mehrmann, Lena Scholz

We discuss the numerical solution of linear and nonlinear optimal control problems in descriptor form. We derive the necessary optimality conditions and discuss their solvability. The resulting optimality systems have a self-adjoint structure that can be used to identify the underlying symplectic flow and to compute the optimal solution in a structure preserving way.

Stability Optimization for Polynomials and Matrices

Michael Overton

Suppose that the coefficients of a monic polynomial or entries of a square matrix depend affinely on parameters, and consider the problem of minimizing the root radius (maximum of the moduli of the roots) or root abscissa (maximum of their real parts) in the polynomial case, and the spectral radius or spectral abscissa in the matrix case. These functions are not convex and they are typically not locally Lipschitz near minimizers. We first address polynomials, for which some remarkable analytical results are available in one special case, and then consider the more general case of matrices, focusing on the static output feedback problem arising in control of linear dynamical systems. The polynomial results are joint with V. Blondel, M. Gurbuzbalaban and A. Megretski.
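
The four objective functions named above are one-liners to evaluate with numpy (the hard, nonconvex part is minimizing them over the parameters); a sketch:

```python
import numpy as np

def root_radius(coeffs):
    """Max modulus of the roots of a polynomial (numpy coefficient order)."""
    return float(np.abs(np.roots(coeffs)).max())

def root_abscissa(coeffs):
    """Max real part of the roots."""
    return float(np.roots(coeffs).real.max())

def spectral_radius(A):
    return float(np.abs(np.linalg.eigvals(A)).max())

def spectral_abscissa(A):
    return float(np.linalg.eigvals(A).real.max())
```

A matrix is (continuous-time) stable when its spectral abscissa is negative, and a polynomial is Schur stable when its root radius is below one, which is why these are the natural stability objectives.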

On finding the nearest stable system

Francois-Xavier Orban de Xivry, Yurii Nesterov, Paul Van Dooren

Finding the nearest stable matrix to an unstable one is a difficult problem, independently of the norm used. This is due to the nonconvex nature of the set of stable matrices, as described by the Lyapunov equation, and to the fact that we have to constrain simultaneously all the eigenvalues to the left-hand side of the complex plane. This talk presents a gradient based method to solve a reformulation of this problem for the Frobenius norm. The objective function features a smoothing term which enables us to obtain a convergence analysis showing that a local minimum of the proposed reformulation is attained.

Subspace methods for computing the pseudospectral abscissa and the stability radius

Daniel Kressner, Bart Vandereycken

The pseudospectral abscissa and the stability radius are well-established tools for quantifying the stability of a matrix under unstructured perturbations. Based on first-order eigenvalue expansions, Guglielmi and Overton [SIAM J. Matrix Anal. Appl., 32 (2011), pp. 1166-1192] recently proposed a linearly converging iterative method for computing the pseudospectral abscissa. In this talk, we propose to combine this method and its variants with subspace acceleration. Each extraction step computes the pseudospectral abscissa of a small rectangular matrix pencil, which is comparably cheap and guarantees monotonicity. We prove local superlinear convergence for the resulting subspace methods. Moreover, these methods extend naturally to computing the stability radius. A number of numerical experiments demonstrate the robustness and efficiency of the subspace methods.

Linear and Nonlinear Perron-Frobenius Theory

Policy iteration algorithm for zero-sum two player stochastic games: complexity bounds involving nonlinear spectral radii

Marianne Akian, Stephane Gaubert

Recent results of Ye and of Hansen, Miltersen and Zwick show that policy iteration for one or two player zero-sum stochastic games, restricted to instances with a fixed and uniform discount rate, is strongly polynomial. We extend these results to stochastic games in which the discount rate can vanish, or even be negative. The restriction is now that the spectral radii of the nonnegative matrices associated to all strategies are bounded from above by a fixed constant strictly less than 1. This is the case in particular if the discount rates are nonnegative and if the life time for any strategy is bounded a priori. We also obtain an extension to a class of ergodic (mean-payoff) problems with irreducible stochastic transition matrices. Our results are based on methods of nonlinear Perron-Frobenius theory.

Infimum over certain conjugated operator norms as a generalization of the Perron–Frobenius eigenvalue

Assaf Goldberger

Let || · || denote the operator norm for a Euclidean space. If A is a square matrix, we study the quantity µG(A) := inf_{D∈G} ||DAD−1||, where G is a subgroup of matrices. We mainly focus on two cases: where G is a subgroup of diagonal matrices, or where G is a compact subgroup. In both cases we are able to show that µG(A) is a solution to a nonlinear eigenvalue problem on A. The value µG(A) satisfies a few properties similar to the Perron–Frobenius value for nonnegative A, such as inheritance, Collatz-Wielandt bounds, and more. Part of this work is based on joint work with Miki Neumann.

Generalized Perron-Frobenius property and positivity of matrix spectra

Olga Kushel

We study a generalization of the Perron-Frobenius property with respect to a proper cone in Rn. We examine matrices which possess the generalized Perron-Frobenius property together with their compound matrices. We give examples of such matrices and apply the obtained theory to some matrix classes defined by determinantal inequalities.
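
For the classical cone of entrywise nonnegative vectors, the Perron-Frobenius property (the spectral radius is an eigenvalue with a nonnegative eigenvector) can be observed with power iteration; a minimal sketch, assuming a primitive nonnegative matrix:

```python
import numpy as np

def perron_pair(A, iters=1000):
    """Power iteration for an entrywise nonnegative (primitive) matrix A:
    approximates the spectral radius and a nonnegative Perron eigenvector."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)
    return float(x @ A @ x), x             # Rayleigh quotient; ||x|| = 1
```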

Hamiltonian actions on the cone of positive definite matrices

Yongdo Lim

The semigroup of Hamiltonians acting on the cone of positive definite matrices via linear fractional transformations satisfies the Birkhoff contraction formula for the Thompson metric. In this talk we describe the action of the Hamiltonians lying in the boundary of the semigroup. This involves in particular a construction of linear transformations leaving invariant the cone of positive definite matrices (strictly positive linear mappings) parameterized over all square matrices. Its invertibility and relation to the Lyapunov and Stein operators are investigated in detail.

Denjoy-Wolff Type Theorems on Cones

Brian Lins

The classical Denjoy-Wolff theorem states that if an analytic self-map of the open unit disk in C has no fixed point, then all iterates of that map converge to a single point on the boundary of the unit disk. A direct generalization of this theorem applies to normalized iterates of order-preserving, homogeneous of degree one maps f : int C → int C when C is a strictly convex closed cone with nonempty interior in Rn. If the map f has no eigenvector in int C, then all normalized iterates of f converge to a single point on the boundary of C. A further generalization of this result applies when f is defined on polyhedral cones. In that case, all limit points of the normalized iterates are contained in a convex subset of the boundary of C. We will survey some of the recent progress attempting to extend these Denjoy-Wolff type theorems to maps on other cones, particularly symmetric cones.

The compensation game and the real nonnegative inverse eigenvalue problem

Alberto Borobia, Julio Moro, Ricardo Soto

A connection is drawn between the problem of characterizing all possible real spectra of entrywise nonnegative matrices (the so-called real nonnegative inverse eigenvalue problem) and a game of combinatorial nature, related with compensation of negativity. The game is defined in such a way that, given a set Λ of real numbers, if the game with input Λ has a solution then Λ is realizable, i.e., it is the spectrum of some entrywise nonnegative matrix. This defines a special kind of realizability, which we call C-realizability. After observing that the set of all C-realizable sets is a strict subset of the set of realizable ones, we show that it strictly includes, in particular, all sets satisfying several previously known sufficient realizability conditions in the literature. Furthermore, the proofs of these conditions become much simpler when approached from this new point of view.
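
One of the classical sufficient conditions referred to above is Suleimanova's: a real list λ1 ≥ 0 > λ2 ≥ · · · ≥ λn with nonnegative sum is realizable, and (by a result attributed to Friedland) a realizing matrix can be taken in companion form. A small numerical check of that construction, assuming such a list:

```python
import numpy as np

def companion_from_spectrum(lams):
    """Companion matrix whose characteristic polynomial has roots lams.
    For a Suleimanova list the result is entrywise nonnegative."""
    coeffs = np.poly(lams)            # monic coefficients [1, c1, ..., cn]
    n = len(lams)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # subdiagonal of ones
    C[:, -1] = -coeffs[1:][::-1]      # last column [-c_n, ..., -c_1]
    return C
```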

Generalizing the Krein-Rutman Theorem

Roger Nussbaum

The original Krein-Rutman theorem considers a closed, total cone C in a real Banach space X and a continuous, compact linear map f of X to X such that f(C) is contained in C. If r > 0, where r denotes the spectral radius of f, the Krein-Rutman theorem asserts that f has an eigenvector in C with eigenvalue r. For applications in a variety of directions (e.g., game theory, machine scheduling and mathematical biology) it is desirable to consider a closed cone C in a real Banach space and a map f of C to C which is continuous, homogeneous of degree one, and preserves the partial ordering induced by C. To extend the Krein-Rutman theorem to this context, it is necessary to find appropriate definitions of the "cone spectral radius of f", r(f;C), and the "cone essential spectral radius of f", ρ(f;C). We discuss what seem appropriate definitions of r(f;C) and ρ(f;C) and mention some theorems and open questions concerning these quantities. If ρ(f;C) is strictly less than r(f;C), we conjecture that f has an eigenvector in C with eigenvalue r(f;C). We describe some theorems which support this conjecture and mention some connections to an old, open conjecture in asymptotic fixed point theory.

Constructing new spectra of nonnegative matrices from known spectra of nonnegative matrices

Helena Smigoc, Richard Ellard

The question of which lists of complex numbers are the spectrum of some nonnegative matrix is known as the nonnegative inverse eigenvalue problem (NIEP). A list of complex numbers that is the spectrum of a nonnegative matrix is called realizable.

In the talk we will present some methods on how to construct new realizable spectra from known realizable spectra. In particular, let

σ = (λ1, a + ib, a − ib, λ4, . . . , λn)

be realizable with the Perron eigenvalue λ1. We will show how to use σ to construct some new realizable lists of the form (µ1, µ2, µ3, µ4, λ4, . . . , λn). In the second part of the talk we will start with a realizable list

σ = (λ1, λ2, λ3, . . . , λn)

with the Perron eigenvalue λ1 and a real eigenvalue λ2, and we will construct realizable spectra of the form (µ1, µ2, . . . , µk, λ3, . . . , λn).

Linear Complementarity Problems and Beyond

The central role of bimatrix games and LCPs with P-matrices in the computational analysis of Lemke-resolvable LCPs

Ilan Adler

A major effort in linear complementarity theory has been the study of the Lemke algorithm. Over the years, many classes of linear complementarity problems (LCPs) were proven to be resolvable by the Lemke algorithm. For the majority of these classes it has been shown that these problems belong to the complexity class Polynomial Parity Arguments on Directed graphs (PPAD) and thus can be theoretically reduced to bimatrix games, which are known to be PPAD-complete. In this talk we present a simple direct reduction of Lemke-resolvable LCPs to bimatrix games. Next, we present a reduction, via perturbation, of several major classes of PPAD LCPs, which are not known to be PPAD-complete, to LCPs with P-matrices. We conclude by discussing a major open problem in LCP theory regarding the question of how hard it is to solve LCP problems with P-matrices.
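
For small instances the defining conditions of an LCP — find z ≥ 0 with w = Mz + q ≥ 0 and z^T w = 0 — can be solved by brute-force enumeration of complementary index sets. This exponential sketch is only meant to make the problem statement concrete (for a P-matrix the solution exists and is unique for every q); Lemke's method instead pivots through these complementary bases:

```python
import numpy as np
from itertools import product

def solve_lcp_bruteforce(M, q, tol=1e-10):
    """Enumerate the 2^n complementary index sets to solve LCP(q, M)."""
    n = len(q)
    for basis in product([0, 1], repeat=n):   # 1: z_i basic, 0: w_i basic
        idx = [i for i in range(n) if basis[i]]
        z = np.zeros(n)
        if idx:
            # Force w_i = 0 for i in idx: M[idx, idx] z[idx] = -q[idx].
            try:
                z[idx] = np.linalg.solve(M[np.ix_(idx, idx)], -q[idx])
            except np.linalg.LinAlgError:
                continue
        w = M @ z + q
        if np.all(z >= -tol) and np.all(w >= -tol):
            return z
    return None
```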

Three Solution Properties of a Class of LCPs: Sparsity, Elusiveness, and Strongly Polynomial Computability

Ilan Adler, Richard Cottle, Jong-Shi Pang

We identify a class of Linear Complementarity Problems (LCPs) that are solvable in strongly polynomial time by Lemke's Algorithm (Scheme 1) or by the Parametric Principal Pivoting Method (PPPM). This algorithmic feature for the class of problems under consideration here is attributable to the proper selection of the covering vectors in Scheme 1 or the parametric direction vectors in the PPPM, which lead to solutions of limited and monotonically nondecreasing support size, hence sparsity. These and other LCPs may very well have multiple solutions, many of which are unattainable by either algorithm and thus are said to be elusive.

A Pivotal Method for Affine Variational Inequalities (PATHAVI)

Youngdae Kim, Michael Ferris

PATHAVI is a solver for affine variational inequalities (AVIs) defined over a polyhedral convex set. It computes a solution by finding a zero of the normal map associated with a given AVI, which in this case is a piecewise linear equation. We introduce a path-following algorithm for solving the piecewise linear equation of the AVI using a pivotal method. The piecewise linear equation uses a starting point (an initial basis) computed using the Phase I procedure of the simplex method from any linear programming package. The main code follows a piecewise linear manifold in a complementary pivoting manner similar to Lemke's method. Implementation issues, including the removal of the lineality space of a polyhedral convex set using LU factorization, repairing singular bases, and proximal point strategies, will be discussed. Some experimental results comparing PATHAVI to other codes based on reformulations as a complementarity problem will be given.

Uniform Lipschitz property of spline estimation of shape constrained functions

Jinglai Shen

In this talk, we discuss uniform properties of structured complementarity problems arising from shape restricted estimation and asymptotic analysis. Specifically, we consider B-spline estimation of a convex function in nonparametric statistics. The optimal spline coefficients of this estimator give rise to a family of size varying complementarity problems and Lipschitz piecewise linear functions. In order to show the uniform convergence of the estimator, we establish a critical uniform Lipschitz property in the infinity norm. Its implications in the asymptotic statistical analysis of the convex B-spline estimator are revealed.

Linear Least Squares Methods: Algorithms, Analysis, and Applications

Structured Condition Numbers for a Linear Functional of the Tikhonov Regularized Solution

Huaian Diao, Yimin Wei

Both structured componentwise and normwise perturbation analyses of Tikhonov regularization are presented. The structured matrices include Toeplitz, Hankel, Vandermonde, and Cauchy matrices. Structured normwise, mixed and componentwise condition numbers for Tikhonov regularization are introduced and exact expressions are derived. By means of the power method, fast condition estimation algorithms are proposed. The structured condition numbers and perturbation bounds are tested on some numerical examples and compared with unstructured ones. The numerical examples illustrate that the structured mixed condition numbers give sharper perturbation bounds than the others.
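
For reference, the Tikhonov regularized solution whose conditioning is analyzed here solves min ‖Ax − b‖² + λ²‖x‖²; a numerically convenient way to compute it is via the equivalent augmented least-squares problem (a generic sketch, not the structured algorithms of the talk):

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov regularized solution x = argmin ||Ax - b||^2 + lam^2 ||x||^2,
    computed via the equivalent augmented least-squares problem
    [A; lam I] x ~= [b; 0]."""
    m, n = A.shape
    A_aug = np.vstack([A, lam * np.eye(n)])
    b_aug = np.concatenate([b, np.zeros(n)])
    x, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return x
```

The solution satisfies the regularized normal equations (AᵀA + λ²I)x = Aᵀb, but the augmented formulation avoids forming AᵀA explicitly.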


Inner-iteration preconditioning for the minimum-norm solution of rank-deficient linear systems

Keiichi Morikuni, Ken Hayami

We propose inner-iteration preconditioning for the right-preconditioned generalized minimal residual (AB-GMRES) method for solving the linear system of equations

Ax = b, (1)

where A ∈ Rm×n may be rank-deficient: rank A < min(m,n), and the system is consistent: b ∈ R(A). Here, R(A) denotes the range of A. Preconditioners for Krylov subspace methods for (1) have been less studied in the rank-deficient case compared to the full-rank case. Here, we are particularly interested in obtaining the pseudo-inverse solution of (1), whose Euclidean norm is minimum, and which is useful in applications such as inverse problems and control.

We show that several steps of linear stationary iterative methods serve as inner-iteration preconditioners for AB-GMRES. Numerical experiments on large and sparse problems illustrate that the proposed method outperforms the preconditioned CGNE and LSMR methods in terms of CPU time, especially in ill-conditioned and rank-deficient cases.

We also show that a sufficient condition for the proposed method to determine the pseudo-inverse solution of (1) without breakdown within r iterations for an arbitrary initial approximate solution x0 ∈ Rn, where r = rank A, is that the iteration matrix H of the stationary inner iterations is semi-convergent, i.e., lim_{i→∞} H^i exists. The theory holds irrespective of whether A is over- or under-determined, and whether A is of full rank or rank-deficient. The condition is satisfied by the Jacobi and successive over-relaxation (JOR and SOR) methods applied to the normal equations. Moreover, we use efficient implementations of these methods, such as the NE-SOR method, as inner iterations without explicitly forming the normal equations.

This is joint work with Ken Hayami of the National Institute of Informatics and The Graduate University for Advanced Studies.
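The pseudo-inverse (minimum-norm) solution that the method above targets can be illustrated with a minimal NumPy sketch (this is not the authors' AB-GMRES implementation; the rank-deficient matrix and seed are arbitrary):

```python
import numpy as np

# A consistent, rank-deficient system A x = b (A is 5x4 of rank 2).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 4))
b = A @ rng.standard_normal(4)        # consistency: b in R(A)

# Pseudo-inverse solution: the solution of minimum Euclidean norm.
x_dag = np.linalg.pinv(A) @ b

# Any null-space direction gives another solution of strictly larger norm.
_, _, Vt = np.linalg.svd(A)
n_vec = Vt[-1]                        # a unit vector in null(A)
assert np.linalg.norm(A @ (x_dag + n_vec) - b) < 1e-8
assert np.linalg.norm(x_dag) < np.linalg.norm(x_dag + n_vec)
```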

Lattice Basis Reduction: Preprocessing the In-teger Least Squares Problem

Sanzheng Qiao

The integer least squares (ILS) problem arises from numerous applications, such as the geometry of numbers, cryptography, and wireless communications. It is known that the complexity of the algorithms for solving the ILS problem increases exponentially with the size of the coefficient matrix. To circumvent this high complexity, fast approximation methods have been proposed. However, the performance of the approximation methods, measured by the error rate, depends on the structure of the coefficient matrix, specifically on the condition number and/or the orthogonality defect. Lattice basis reduction, applied as a preprocessor to the coefficient matrix, can significantly improve the condition number and orthogonality defect, and thus improve the performance. In this talk, after introducing lattices, bases, and basis reduction, we give a brief survey of lattice basis reduction algorithms, then present our recent work.
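The orthogonality defect mentioned above is easy to compute; a small sketch (the skewed basis and its unimodular reduction below are made-up examples, not from the talk) shows how a reduced basis of the same lattice can have a far smaller defect:

```python
import numpy as np

def orthogonality_defect(B):
    # Product of column norms divided by |det B|; equals 1 iff columns orthogonal.
    return np.prod(np.linalg.norm(B, axis=0)) / abs(np.linalg.det(B))

# A skewed basis (columns) and an equivalent reduced basis of the same lattice.
B_bad = np.array([[1.0, 201.0],
                  [2.0, 401.0]])                  # det = -1
U = np.array([[1, -200], [0, 1]])                 # unimodular transform
B_good = B_bad @ U                                # columns (1,2) and (1,1)

print(orthogonality_defect(B_bad))   # large
print(orthogonality_defect(B_good))  # close to 1
```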

Stochastic conditioning of systems of equations

David Titley-Peloquin, Serge Gratton

Given nonsingular A ∈ Rn×n and b ∈ Rn, how sensitive is x = A−1b to perturbations in the data? This is a fundamental and well-studied question in numerical linear algebra. Bounds on ‖A−1b − (A + E)−1b‖ can be stated using the condition number of the mapping (A, b) ↦ A−1b, provided ‖A−1E‖ < 1. If ‖A−1E‖ > 1, nothing can be said, as A + E might be singular. These well-known results answer the question: how sensitive is x to perturbations in the worst case? However, they say nothing of the typical or average-case sensitivity.

We consider the sensitivity of x to random noise E. Specifically, we are interested in properties of the random variable

µ(A, b, E) = ‖A−1b − (A + E)−1b‖ = ‖(A + E)−1Ex‖,

where vec(E) ∼ (0, Σ) follows various distributions. We attempt to quantify the following:

• How descriptive on average is the known worst-case analysis for small perturbations?

• Can anything be said about the typical sensitivity of x even for large perturbations?

We provide asymptotic results for ‖Σ‖ → 0 as well as bounds that hold for large ‖Σ‖. We extend some of our results to structured perturbations as well as to the full-rank linear least squares problem.
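The random variable µ(A, b, E) above lends itself to a direct Monte Carlo sketch (illustrative only; the Gaussian noise model, size, and seed are arbitrary choices, not the distributions analyzed in the talk):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)

# Sample mu(A, b, E) = ||x - (A+E)^{-1} b|| for small Gaussian perturbations E.
sigma = 1e-6
samples = np.array([
    np.linalg.norm(x - np.linalg.solve(A + sigma * rng.standard_normal((n, n)), b))
    for _ in range(200)
])

# Typical vs. extreme observed sensitivity over the sample.
print(samples.mean(), samples.max())
```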

Matrices and Graph Theory

Minimum rank, maximum nullity, and zero forcing number of simple digraphs

Adam Berliner, Minerva Catral, Leslie Hogben, My Huynh, Kelsey Lied, Michael Young

Extensive work has been done on problems related to finding the minimum rank among the family of real symmetric matrices whose off-diagonal zero-nonzero pattern is described by a given simple graph G. In this talk, we will discuss the case of simple digraphs, which describe the off-diagonal zero-nonzero pattern of a family of (not necessarily symmetric) matrices. Furthermore, we will establish cut-vertex reduction formulas for the minimum rank and zero forcing number of simple digraphs, analyze the effect of the deletion of a vertex on the minimum rank and zero forcing number, and characterize simple digraphs whose zero forcing number is very low or very high.


Equitable partitions and the normalized Laplacian matrix

Steve Butler

The normalized Laplacian matrix is L = D−1/2(D − A)D−1/2. This matrix is connected to random walks, and its eigenvalues give useful information about a graph, in some cases similar to what the eigenvalues of the adjacency or combinatorial Laplacian give. We will review some of these basic properties and then focus on equitable partitions and their relationship to the eigenvalues of the normalized Laplacian. (An equitable partition is a partitioning of the vertices into sets V1 ∪ V2 ∪ · · · ∪ Vk such that the number of edges between a vertex v ∈ Vi and Vj is independent of v.)
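The matrix L = D−1/2(D − A)D−1/2 is straightforward to form; a short sketch (a standalone example on the path with 3 vertices, not from the talk) confirms the familiar spectrum:

```python
import numpy as np

def normalized_laplacian(A):
    # L = D^{-1/2} (D - A) D^{-1/2}; assumes no isolated vertices (all degrees > 0).
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ (np.diag(d) - A) @ D_inv_sqrt

# Path on 3 vertices: normalized Laplacian eigenvalues are 0, 1, 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
evals = np.linalg.eigvalsh(normalized_laplacian(A))
print(np.round(evals, 6))  # [0. 1. 2.]
```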

Zero Forcing Number and Maximum Nullity of Subdivided Graphs

Minerva Catral, Leslie Hogben, My Huynh, Kirill Lazebnik, Michael Young, Anna Cepek, Travis Peters

Let G = (V, E) be a simple, undirected graph with vertex set V and edge set E. For an edge e = uv of G, define Ge to be the graph obtained from G by inserting a new vertex w into V, deleting the edge e, and inserting the edges uw and wv. We say that the edge e has been subdivided and call Ge an edge subdivision of G. A complete subdivision graph s(G) is obtained from a graph G by subdividing every edge of G once. In this talk, we investigate the maximum nullity and zero forcing number of graphs obtained by a finite number of edge subdivisions of a given graph, and present results that support the conjecture that for all graphs G, the maximum nullity of s(G) and the zero forcing number of s(G) coincide.

The rank of a positive semidefinite matrix and cycles in its graph

Louis Deaett

Given a connected graph G on n vertices, consider an n × n positive semidefinite Hermitian matrix whose ij-entry is nonzero precisely when the ith and jth vertices of G are adjacent, for all i ≠ j. The smallest rank of such a matrix is the minimum semidefinite rank of G. If G is triangle-free, a result of Moshe Rosenfeld gives a lower bound of n/2 on its minimum semidefinite rank. This talk presents a new proof of this result and uses it as an avenue to explore a more general connection between cycles in a graph and its minimum semidefinite rank.

On the minimum number of distinct eigenvalues of a graph

Shaun Fallat

For a graph G, the minimum number of distinct eigenvalues, where the minimum is taken over all matrices in S(G), is denoted by q(G). This parameter is very important for understanding the inverse eigenvalue problem for graphs and, perhaps, has not received as much attention as its counterpart, the maximum multiplicity. In this talk, I will discuss graphs attaining extreme values of q and will focus on certain families of graphs, including joins of graphs, complete bipartite graphs, and cycles. This work is joint with other members of the Discrete Math Research Group at the University of Regina.

Refined Inertias of Tree Sign Patterns

Colin Garnett, Dale Olesky, Pauline van den Driessche

The refined inertia (n+, n−, nz, 2np) of a real matrix is the ordered 4-tuple that subdivides the number n0 of eigenvalues with zero real part in the inertia (n+, n−, n0) into those that are exactly zero (nz) and those that are purely imaginary (2np). For n > 2, the set of refined inertias Hn = {(0, n, 0, 0), (0, n − 2, 0, 2), (2, n − 2, 0, 0)} is important for the onset of Hopf bifurcation in dynamical systems. In this talk we discuss tree sign patterns of order n that require or allow the refined inertias Hn. We describe necessary and sufficient conditions for a tree sign pattern to require H4. We prove that if a star sign pattern requires Hn, then it must have exactly one zero diagonal entry associated with a leaf in its digraph.

The Minimum Coefficient of Ergodicity for a Markov Chain with a Specified Directed Graph

Steve Kirkland

Suppose that T is an n × n stochastic matrix. The function τ(T) = (1/2) max_{i,j=1,...,n} ‖(ei − ej)^T T‖1 is known as a coefficient of ergodicity for T, and measures the rate at which the iterates of a Markov chain with transition matrix T converge to the stationary distribution vector. Many Markov chains are equipped with an underlying combinatorial structure that is described by a directed graph, and in view of that fact, we consider the following problem: given a directed graph D, find τmin(D) ≡ min τ(T), where the minimum is taken over all stochastic matrices T whose directed graph is subordinate to D.

In this talk, we characterise τmin(D) as the solution to a linear programming problem. We then go on to provide bounds on τmin(D) in terms of the maximum outdegree of D and the maximum indegree of D. A connection is established between the equality case in a lower bound on τmin(D) and certain incomplete block designs.
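The coefficient τ(T) defined above is a simple pairwise maximum over rows; a minimal sketch (the 3-state chain below is a made-up example):

```python
import numpy as np
from itertools import combinations

def ergodicity_coefficient(T):
    # tau(T) = (1/2) max_{i,j} || (e_i - e_j)^T T ||_1 = half the largest
    # l1-distance between two rows of the stochastic matrix T.
    n = T.shape[0]
    return 0.5 * max(np.abs(T[i] - T[j]).sum() for i, j in combinations(range(n), 2))

T = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])
print(ergodicity_coefficient(T))  # 0.5
```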

The Dirac operator of a finite simple graph

Oliver Knill

The Dirac operator D of a finite simple graph G is an n × n matrix, where n is the total number of cliques in the graph. Its square L is the Laplace-Beltrami operator, a block diagonal matrix which leaves the linear spaces of p-forms invariant. The eigenvalues of L come in pairs; each nonzero fermionic eigenvalue appears also as a bosonic eigenvalue. This supersymmetry leads to the McKean-Singer result that the super trace of the heat kernel str(exp(−tL)) is the Euler characteristic of G for all t. We discuss combinatorial consequences for the graph related to random walks and trees in the simplex graph, and the use of D to compute the cohomology of the graph via the Hodge theorem. The matrix D is also useful for studying spectral perturbation problems, and for studying discrete partial differential equations such as the wave equation u″ = −Lu or the heat equation u′ = −Lu on the graph. We will show some animations of such evolutions.

Logarithmic Tree Numbers for Acyclic Complexes

Woong Kook, Hyuk Kim

For a d-dimensional cell complex Γ with Hi(Γ) = 0 for −1 ≤ i < d, an i-dimensional tree is a non-empty collection B of i-dimensional cells in Γ such that Hi(B ∪ Γ(i−1)) = 0 and w(B) := |Hi−1(B ∪ Γ(i−1))| is finite, where Γ(i) is the i-skeleton of Γ. Define the i-th tree number to be ki := Σ_B w(B)², where the sum is over all i-dimensional trees. In this talk, we will show that if Γ is acyclic and ki > 0 for −1 ≤ i ≤ d, then the ki and the combinatorial Laplace operators ∆i are related by Σ_{i=−1}^{d} ωi x^{i+1} = (1 + x)² Σ_{i=0}^{d−1} κi x^i, where ωi = log det ∆i and κi = log ki. We will discuss various applications of this equation.

Minimum rank for circulant graphs

Seth Meyer

There are many known bounds on the minimum rank of a graph; the two we will focus on are the minimum semidefinite rank and the zero forcing number. While much is known about these parameters for small graphs, families of large graphs remain an object of study. In this talk we will discuss bounds on the minimum rank of graphs with circulant adjacency matrices, more commonly known as circulant graphs.

Matrix Ranks in Graph Theory

T. S. Michael

Matrix ranks over various fields have several applications in graph theory and combinatorics, for instance in establishing the non-existence of a graph or the non-isomorphism of two graphs. We discuss some old and new theorems along these lines with an emphasis on tournaments and graph decompositions.

The Algebraic Connectivity of Planar Graphs

Jason Molitierno

The Laplacian matrix of a graph on n vertices labeled 1, . . . , n is the n × n matrix L = [ℓij] in which ℓii is the degree of vertex i and ℓij, for i ≠ j, is −1 if vertices i and j are adjacent and 0 otherwise. Since L is positive semidefinite and singular, we can order the eigenvalues 0 = λ1 ≤ λ2 ≤ · · · ≤ λn. The eigenvalue λ2 is known as the algebraic connectivity of the graph, as it gives a measure of how connected the graph is. In this talk, we derive an upper bound on the algebraic connectivity of planar graphs and determine all planar graphs for which the upper bound is achieved. If time permits, we will extend these results in two directions. First, we will derive smaller upper bounds on the algebraic connectivity of planar graphs with a large number of vertices. Second, we will consider upper bounds on the algebraic connectivity of a graph as a function of its genus.
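The algebraic connectivity λ2 defined above is directly computable from the Laplacian; a small sketch (using the planar graph K4 as a standalone check, not an example from the talk):

```python
import numpy as np

def algebraic_connectivity(A):
    # Second-smallest eigenvalue of the combinatorial Laplacian L = D - A.
    L = np.diag(A.sum(axis=1)) - A
    return np.sort(np.linalg.eigvalsh(L))[1]

# Complete graph K4 (planar): Laplacian spectrum is {0, 4, 4, 4}, so lambda_2 = 4.
A = np.ones((4, 4)) - np.eye(4)
print(round(algebraic_connectivity(A), 6))  # 4.0
```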

Using the weighted normalized Laplacian to construct cospectral unweighted bipartite graphs

Steven Osborne

The normalized Laplacian matrix of a simple graph G is

L(G)uv := 1 if u = v and deg v ≠ 0; −1/√(deg u deg v) if u ∼ v; and 0 otherwise.

We say two graphs G and H are cospectral with respect to the normalized Laplacian if L(G) and L(H) have the same spectrum, including multiplicity. Even for small n, it is difficult to build two graphs on n vertices which are cospectral. We can extend the definition of the normalized Laplacian to weighted graphs. We show that the spectrum of certain weighted graphs is related to the spectrum of related unweighted graphs. This allows us to construct unweighted graphs which are cospectral with respect to the normalized Laplacian.

The λ-τ problem for symmetric matrices with a given graph

Bryan Shader, Keivan Hassani Monfared

Let λ1 < · · · < λn and τ1 < · · · < τn−2 be 2n − 2 real numbers that satisfy the second-order Cauchy interlacing inequalities λi < τi < λi+2 and the nondegeneracy condition λi+1 ≠ τi. Given a connected graph G on n vertices in which vertices 1 and 2 are adjacent, it is proven that there is a real symmetric matrix A with eigenvalues λ1, . . . , λn, whose submatrix A(1, 2) has eigenvalues τ1, . . . , τn−2, such that the graph of A is G, provided each connected component of the graph obtained from G by deleting the edge 1–2 has a sufficient number of vertices.

A Simplified Principal Minor Assignment Problem

Pauline van den Driessche

Given a vector u ∈ R^{2^n}, the principal minor assignment problem asks when there is a matrix of order n having its 2^n principal minors (in lexicographic order) equal to u. Given a sequence r0r1 · · · rn with rj ∈ {0, 1}, this simplified principal minor assignment problem asks when there is a real symmetric matrix of order n having a principal submatrix of rank k if and only if rk = 1, for 0 ≤ k ≤ n. Several families of matrices are constructed to attain certain classes of sequences, with particular emphasis on adjacency matrices of graphs. Other classes of sequences are shown to be not attainable. This is joint work with R.A. Brualdi, L. Deaett, and D.D. Olesky.

Generalized Propagation Time of Zero Forcing

Michael Young

Zero forcing is a type of graph propagation that starts with a coloring of the vertices as white and blue; at each step, any blue vertex with a unique white neighbor "forces" the color of that white vertex to become blue. In this talk, we look at what happens to the length of time that it takes for all vertices to become blue when the size of the initial set of blue vertices is altered. We also discuss a tight relationship between the zero forcing number and the number of edges in the graph.
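The forcing rule above can be simulated directly; a minimal sketch (the helper `propagation_time` and the path example are illustrative, not code from the talk):

```python
def propagation_time(neighbors, blue):
    """Rounds until all vertices are blue under the zero forcing rule,
    or None if propagation stalls. `neighbors` maps vertex -> set of neighbors."""
    blue = set(blue)
    rounds = 0
    while len(blue) < len(neighbors):
        # All forces in a round are applied simultaneously: a blue vertex
        # with exactly one white neighbor forces that neighbor.
        forced = {next(iter(neighbors[v] - blue))
                  for v in blue if len(neighbors[v] - blue) == 1}
        if not forced:
            return None          # stalled: initial set was not a zero forcing set
        blue |= forced
        rounds += 1
    return rounds

# Path 0-1-2-3: a single endpoint is a zero forcing set, taking 3 rounds.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(propagation_time(path, {0}))  # 3
```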

Matrices and Orthogonal Polynomials

Optimal designs, orthogonal polynomials and random matrices

Holger Dette

The talk explains several relations between different areas of mathematics: mathematical statistics, random matrices, and special functions. We give a careful introduction to the theory of optimal designs, which are used to improve the accuracy of statistical inference without performing additional experiments. It is demonstrated that for certain regression models orthogonal polynomials play an important role in the construction of optimal designs. In the next step these results are connected with some classical facts from random matrix theory.

In the third part of this talk we discuss some new results on special functions and random matrices. In particular, we analyze random band matrices, which generalize the classical Gaussian ensembles. We show that the random eigenvalues of such matrices behave similarly to the deterministic roots of matrix orthogonal polynomials with varying recurrence coefficients. We study the asymptotic zero distribution of such polynomials and demonstrate that these results can be used to find the asymptotic properties of the spectrum of random band matrices.

Finite-difference Gaussian rules

Vladimir Druskin

Finite-difference Gaussian rules (FDGR), a.k.a. optimal or spectrally matched grids, were introduced for accurate approximation of Neumann-to-Dirichlet (NtoD) maps of second-order PDEs. The technique uses simple second-order finite-difference approximations with optimized placement of the grid points, yielding exponential superconvergence of the finite-difference NtoD map. The fact that the NtoD map is well approximated makes the approach ideal for inverse problems and absorbing boundary conditions. We discuss connections of the FDGR to Jacobi inverse eigenvalue problems, Stieltjes continued fractions, rational approximations, and some other classical techniques. Contributors: Sergey Asvadurov, Liliana Borcea, Murthy Guddatti, Fernando Guevara Vasquez, David Ingerman, Leonid Knizhnerman, Alexander Mamonov, and Shari Moskow.

Wronskian type determinants of orthogonal polynomials, Selberg type formulas and constant term identities

Antonio Duran

Determinants whose entries are orthogonal polynomials are a long-studied subject. One can mention the Turan inequality for Legendre polynomials and its generalizations, especially that of Karlin and Szego on Hankel determinants whose entries are ultraspherical, Laguerre, Hermite, Charlier, Meixner, Krawtchouk, and other families of orthogonal polynomials. Karlin and Szego's strategy was to express these Hankel determinants in terms of the Wronskian of certain orthogonal polynomials of another class. B. Leclerc extended this result by stating that a Wronskian of orthogonal polynomials is always proportional to a Hankel determinant whose elements form a new sequence of polynomials (not necessarily orthogonal). The purpose of this talk is to show a broad extension of this result for any linear operator T acting on the linear space of polynomials and satisfying deg(T(p)) = deg(p) − 1 for every polynomial p. In such a case, we establish a procedure to compute the Wronskian-type n × n determinant det(T^{i−1}(p_{m+j−1}(x)))_{i,j=1}^{n}, where (pn)n is any sequence of orthogonal polynomials. For T = d/dx we recover Leclerc's result, and for T = ∆ (the first-order difference operator) we get some nice symmetries for Casorati determinants of classical discrete orthogonal polynomials. We also show that for certain operators T, the procedure is related to some Selberg-type integrals and constant term identities of certain multivariate Laurent expansions.

Matrix Orthogonal Polynomials in Bivariate Orthogonal Polynomials

Jeff Geronimo

The theory of matrix orthogonal polynomials sheds much light on the theory of bivariate orthogonal polynomials on the bicircle and on the square, especially in the context of bivariate Bernstein-Szego measures. We will review the theory of matrix orthogonal polynomials and elaborate on the above connection.

The tail condition for Leonard pairs

Edward Hanson

Let V denote a vector space with finite positive dimension. We consider an ordered pair of linear transformations A : V → V and A∗ : V → V that satisfy (i) and (ii) below:

(i) There exists a basis for V with respect to which the matrix representing A is irreducible tridiagonal and the matrix representing A∗ is diagonal.

(ii) There exists a basis for V with respect to which the matrix representing A∗ is irreducible tridiagonal and the matrix representing A is diagonal.

We call such a pair a Leonard pair on V. In this talk, we will discuss two characterizations of Leonard pairs that utilize the notion of a tail. This notion is borrowed from algebraic graph theory.

Biradial orthogonal polynomials and complex Jacobi matrices

Marko Huhtanen, Allan Peramaki

Self-adjoint antilinear operators have been regarded as yielding an analogue of normality in the antilinear case. This turns out to be the case also for orthogonal polynomials. Orthogonality based on a three-term recurrence takes place on so-called biradial curves in the complex plane. Complex Jacobi matrices arise.

Laurent orthogonal polynomials and a QR algorithm for pentadiagonal recursion matrices

Carl Jagels

The Laurent-Arnoldi process is an analog of the standard Arnoldi process applied to the extended Krylov subspace. It produces an orthogonal basis for the subspace along with a generalized Hessenberg matrix whose entries consist of the recursion coefficients. As in the standard case, the application of the process to certain types of linear operators results in recursion formulas with few terms. One instance of this occurs when the operator is Hermitian. This case produces an analog of the Lanczos process in which the accompanying orthogonal polynomials are now Laurent polynomials. The process applied to the approximation of matrix functions results in a rational approximation method that converges more quickly for an important class of functions than does the polynomial-based Lanczos method. However, the recursion matrix the process produces is no longer tridiagonal; it is pentadiagonal. This talk demonstrates that an analog of the QR algorithm applied to tridiagonal matrices exists for pentadiagonal recursion matrices, and that the orthogonal matrix Q is a CMV matrix.

Spectral theory of exponentially decaying perturbations of periodic Jacobi matrices

Rostyslav Kozhan

We fully characterize the spectral measures of exponentially decaying perturbations of periodic Jacobi matrices. This gives numerous useful applications:

• We establish the "odd interlacing" property of the eigenvalues and resonances (the analogue of Simon's result for Schrödinger operators): between any two consecutive eigenvalues there must be an odd number of resonances.

• We solve the inverse resonance problem: eigenvalues and resonances uniquely determine the Jacobi matrix.

• We obtain the necessary and sufficient conditions on a set of points to be the set of eigenvalues and resonances of a Jacobi matrix.

• We study the effect of modifying an eigenmass or a resonance on the Jacobi coefficients.

Some of the related questions for Jacobi matrices with matrix-valued coefficients will also be discussed.

Scalar and Matrix Orthogonal Laurent Polynomials in the Unit Circle and Toda Type Integrable Systems

Manuel Manas, Carlos Alvarez Fernandez, Gerardo Ariznabarreta

The Gauss-Borel factorization of a semi-infinite matrix known as the Cantero-Morales-Velazquez moment matrix leads to the algebraic aspects of the theory of orthogonal Laurent polynomials on the unit circle. In particular, we refer to determinantal expressions of the polynomials, five-term recursion relations, and Christoffel-Darboux formulas. At the same time, the Gauss-Borel factorization of convenient "multi-time" deformations of the moment matrix leads to the integrable theory of the 2D Toda hierarchy. Hence, we explore how the integrable elements such as wave functions, Lax and Zakharov-Shabat equations, the Miwa formalism, bilinear equations, and tau functions are related to orthogonal Laurent polynomials on the unit circle. Finally, we extend the study to the matrix polynomial realm, where now we have right and left matrix orthogonal Laurent polynomials.

The scalar part of this talk has been recently published in Advances in Mathematics 240 (2013) 132–193.

CMV matrices and spectral transformations of measures

Francisco Marcellan

In this lecture we deal with spectral transformations of nontrivial probability measures supported on the unit circle. In some previous work with L. Garza, when linear spectral transformations are considered, we have analyzed the connection between the corresponding Hessenberg matrices (GGT matrices) associated with the multiplication operator in terms of the orthonormal polynomial bases of the original and the transformed measures, respectively. The aim of our contribution is to consider the connection between the CMV matrices associated with the multiplication operator with respect to the Laurent orthonormal bases corresponding to a nontrivial probability measure supported on the unit circle and a linear spectral transformation of it. We focus our attention on the Christoffel and Geronimus transformations, where the QR and Cholesky factorizations play a central role. Notice that in the case of Jacobi matrices, which represent the multiplication operator in terms of an orthonormal basis, the analysis of such spectral transformations (direct and inverse Darboux transformations, respectively) yields LU and UL factorizations. A connection between Jacobi and Hessenberg matrices using the Szego transformations of measures supported on the interval [−1, 1] and some particular measures supported on the unit circle is also studied.

This is joint work with M. J. Cantero, L. Moral, and L. Velazquez (Universidad de Zaragoza, Spain).

Rational Krylov methods and Gauss quadrature

Carl Jagels, Miroslav Pranic, Lothar Reichel, Xuebo Yu

The need to evaluate expressions of the form f(A)v or vHf(A)v, where A is a large sparse or structured matrix, v is a vector, f is a nonlinear function, and H denotes transposition and complex conjugation, arises in many applications. Rational Krylov methods can be attractive for computing approximations of such expressions. These methods project the approximation problem onto a rational Krylov subspace of fairly small dimension, and then solve the small approximation problem so obtained. We are interested in the situation when the rational functions that define the rational Krylov subspace have few distinct poles. We discuss the case when A is Hermitian and an orthogonal basis for the rational Krylov subspace can be generated with short recursion formulas. Rational Gauss quadrature rules for the approximation of vHf(A)v will be described. When A is non-Hermitian, the recursions can be described by a generalized Hessenberg matrix. Applications to pseudospectrum computations are presented.

Leonard pairs and the q-Tetrahedron algebra

Paul Terwilliger

A Leonard pair is a pair of diagonalizable linear transformations of a finite-dimensional vector space, each of which acts in an irreducible tridiagonal fashion on an eigenbasis for the other one. The Leonard pairs are classified up to isomorphism, and correspond to the orthogonal polynomials from the terminating branch of the Askey scheme. The most general polynomials in this branch are the q-Racah polynomials. The q-Tetrahedron algebra was introduced in 2006. In this talk we show that each Leonard pair of q-Racah type gives a module for the q-Tetrahedron algebra. We discuss how this module illuminates the structure of the Leonard pair.

A general matrix framework to predict short recurrences linked to (extended) Krylov spaces

Raf Vandebril, Clara Mertens

In many applications, the problems to be solved are related to very large matrices. For system solving, as well as eigenvalue computations, researchers quite often turn to Krylov spaces to project onto, and as such reduce the size of the original problem to a feasible dimension. Unfortunately, the complexity of generating a reliable basis for these Krylov spaces grows quickly, hence the search for short recurrences to compute one basis vector after the other in a modest amount of computing time.

Typically the search for short recurrences is based on inclusion theorems for the Krylov spaces and on the use of inner product relations, often resulting in quite complicated formulas and deductions. In this lecture we will predict the existence of short recurrences by examining the associated matrix structures. We will provide a general framework relying solely on matrix operations to rededuce many existing results. The focus will be on (extended, i.e., involving inverse matrix powers) Krylov spaces for Hermitian, unitary, and normal matrices whose conjugate transpose is a rational function of the matrix itself. Based on this framework it will be possible to easily extract the underlying short recurrences of the basis vectors, constructed from the Krylov vectors.

A quantum approach to Khrushchev’s formula

Francisco Grunbaum, Luis Velazquez, Albert Werner,Reinhard Werner

The beginning of the last decade saw a great advance in the theory of orthogonal polynomials on the unit circle due to the work of Sergei Khrushchev, who introduced many revolutionary ideas connecting with complex analysis and the theory of continued fractions. This body of techniques and results, sometimes known as Khrushchev's theory, has as a cornerstone the so-called Khrushchev's formula, which identifies the Schur function of an orthogonal polynomial modification of a measure on the unit circle.

On the other hand, it was recently discovered that orthogonal polynomials on the unit circle play a significant role in the study of quantum walks, the quantum analogue of random walks. In particular, the Schur functions, which are central to Khrushchev's theory, turn out to be the mathematical tool which best codifies the recurrence properties of a quantum walk. Indeed, Khrushchev's formula has proved to be an important tool for the study of quantum recurrence.


Nevertheless, this talk is not devoted to the applications of orthogonal polynomials on the unit circle or Khrushchev's theory to quantum walks, but the opposite. The most interesting connections tend to be symbiotic, and this is one more example. The usefulness of Schur functions in quantum walks has led to a simple understanding of Khrushchev's formula using quantum matrix diagrammatic techniques. This goes further than giving a quantum meaning to a known mathematical result: this kind of 'Feynman diagrammatics' provides very effective tools to generate new Khrushchev-type formulas, which end in splitting rules for scalar and matrix Schur functions.

Harmonic oscillators and multivariate Krawtchouk polynomials

Luc Vinet

A quantum-mechanical model for the multivariate Krawtchouk polynomials of Rahman is given with the help of harmonic oscillators. It entails an algebraic interpretation of these polynomials as matrix elements of representations of the orthogonal groups. For simplicity, the presentation will focus on the 3-dimensional case, where the bivariate Krawtchouk polynomials are associated to representations of the rotation group SO(3) and their parameters are related to Euler angles. Their characterization and applications will be looked at from this perspective.

(Based on joint work with Alexei Zhedanov.)

Norm-constrained determinantal representations of polynomials

Anatolii Grinshpan, Dmitry Kaliuzhnyi-Verbovetskyi,Hugo Woerdeman

For every multivariable polynomial p, with p(0) = 1, we construct a determinantal representation

p = det(I − KZ),

where Z is a diagonal matrix with the coordinate variables on the diagonal and K is a complex square matrix. Such a representation is equivalent to the existence of K whose principal minors satisfy certain linear relations. When norm constraints on K are imposed, we give connections to the multivariable von Neumann inequality, Agler denominators, and stability. We show that if a multivariable polynomial q, with q(0) = 0, satisfies the von Neumann inequality, then 1 − q admits a determinantal representation with K a contraction. On the other hand, every determinantal representation with a contractive K gives rise to a rational inner function in the Schur-Agler class.
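A representation p = det(I − KZ) with contractive K can be checked numerically; a minimal sketch (the toy polynomial 1 − z1 z2 and the choice of K below are illustrative examples, not taken from the talk):

```python
import numpy as np

# Toy example: p(z1, z2) = 1 - z1*z2 = det(I - K Z) with Z = diag(z1, z2)
# and the contraction K = [[0, 1], [1, 0]] (spectral norm 1).
K = np.array([[0.0, 1.0],
              [1.0, 0.0]])
assert np.linalg.norm(K, 2) <= 1.0 + 1e-12   # K is a contraction

rng = np.random.default_rng(3)
for _ in range(5):
    z1, z2 = rng.uniform(-1, 1, size=2)
    Z = np.diag([z1, z2])
    det_form = np.linalg.det(np.eye(2) - K @ Z)
    assert abs(det_form - (1 - z1 * z2)) < 1e-12
```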

Matrices and Total Positivity

Computations with matrices with signed bidiagonal decomposition

Alvaro Barreras, Juan Manuel Pena

We present a class of matrices with bidiagonal decompositions satisfying some sign restrictions. This class of matrices contains all nonsingular totally positive matrices, as well as their inverses and opposite matrices. It inherits many properties of totally positive matrices. It is shown that, given the bidiagonal decomposition, one can compute to high relative accuracy their eigenvalues, singular values, inverses, triangular decompositions, and the solution of some linear systems. Backward stability of elimination procedures for solving linear systems is also analyzed.

Quasi-LDU factorization of totally nonpositive matrices

Rafael Canto, Beatriz Ricarte, Ana Urbano

A matrix is called totally nonpositive (totally negative) if all its minors are nonpositive (negative), abbreviated t.n.p. (t.n.). These matrices can be considered a generalization of the partially negative matrices, that is, matrices with all principal minors negative. The partially negative matrices are called N-matrices in economic models. If, instead, all minors of a matrix are nonnegative (positive), the matrix is called totally positive (strictly totally positive), abbreviated TP (STP).

The t.n.p. matrices with a negative (1, 1) entry have been characterized using the factors of their LDU factorization. For nonsingular t.n.p. matrices with the (1, 1) entry equal to zero, a characterization is given in terms of the sign of some minors.

Now, we characterize the t.n.p. matrices with the (1, 1) entry equal to zero in terms of an LDU full-rank factorization, where L is a block lower echelon matrix, U is a unit upper echelon TP matrix, and D is a diagonal matrix. This result holds for square t.n.p. matrices when the (n, n) entry is equal to zero or when it is negative, and we do not use permutation similarity. Finally, some properties of this kind of matrices are obtained.

qd-type methods for totally nonnegative quasiseparable matrices

Roberto Bevilacqua, Enrico Bozzo, Gianna Del Corso

We present an implicit shifted LR method for the class of quasiseparable matrices representable in terms of the parameters involved in their Neville factorization. The method resembles the qd method for tridiagonal matrices, and has a linear cost per step. For totally nonnegative quasiseparable matrices, it is possible to show that the algorithm is stable and breakdown cannot occur when the Laguerre shift, or another strategy preserving nonnegativity, is used.


Accurate and fast algorithms for some totally nonnegative matrices arising in CAGD

Jorge Delgado

Gasca and Pena showed that nonsingular totally nonnegative (TNN) matrices and their inverses admit a bidiagonal decomposition. Koev, in a recent work, assuming that the bidiagonal decompositions of a TNN matrix and its inverse are known with high relative accuracy (HRA), presented algorithms for performing some algebraic computations with high relative accuracy: computing the eigenvalues and the singular values of the TNN matrix, computing the inverse of the TNN matrix, and obtaining the solution of some linear systems whose coefficient matrix is the TNN matrix.

Rational Bernstein and Said-Ball bases are usual representations in Computer Aided Geometric Design (CAGD). In some applications in this field it is necessary to solve some of the algebraic problems mentioned before for the collocation matrices of those bases (RBV and RSBV matrices, respectively). In our talk we will show, taking into account that RBV and RSBV matrices are TNN, how to compute the bidiagonal decomposition of the RBV and RSBV matrices with HRA. Then we will apply Koev's algorithms, showing the accuracy of the obtained results for the considered algebraic problems.

The Totally Positive (Nonnegative) Completion Problem and Other Recent Work

Charles Johnson

We survey work on the TP completion problem for "connected" patterns. Time permitting, we also mention some recent work on perturbation of TN to TP. This relates the possible numbers and arrangements of equal entries in a TP matrix to possible incidences of points and lines in the plane, a most exciting advance.

Generalized total positivity and eventual properties of matrices

Olga Kushel

Here we study the following classes of matrices: eventually totally positive matrices, Kotelyansky matrices and eventually P-matrices. We analyze the relations between these three classes. We also discuss common spectral properties of matrices from these classes. The obtained results are applied to some spectral problems.

Deleting derivations algorithm and TNN matrices

Stephane Launois

In this talk, I will present the deleting derivations algorithm, which was first developed in the context of quantum algebras, and explain the significance of this algorithm for the study of TNN matrices.

More accurate polynomial least squares fitting by using the Bernstein basis

Ana Marco, Jose-Javier Martínez

The problem of least squares fitting with algebraic polynomials of degree less than or equal to n, a problem with applications to approximation theory and to linear regression in statistics, is considered.

The coefficient matrix A of the overdetermined linear system to be solved in the least squares sense is a rectangular Vandermonde matrix if the monomial basis is used for the space of polynomials, or a rectangular Bernstein-Vandermonde matrix if the Bernstein basis is used. Under certain conditions both classes of matrices are totally positive, but they usually are ill-conditioned matrices (a problem related to the collinearity subject in statistical regression), which leads to a loss of accuracy in the solution.

Making use of algorithms for the QR factorization of totally positive matrices, the problem is solved more accurately, indicating how to use the Bernstein basis for a general problem and taking advantage of the fact that the condition number of a Bernstein-Vandermonde matrix is smaller than the condition number of the corresponding Vandermonde matrix.
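The conditioning gap can be observed numerically in an illustrative setup of our own (degree and nodes chosen arbitrarily):

```python
import numpy as np
from math import comb

n, m = 8, 20                       # polynomial degree, number of nodes
x = np.linspace(0.05, 0.95, m)     # nodes inside (0, 1)

# Monomial-basis (Vandermonde) and Bernstein-basis collocation matrices.
V = np.vander(x, n + 1, increasing=True)
B = np.array([[comb(n, k) * t**k * (1 - t)**(n - k)
               for k in range(n + 1)] for t in x])

print(np.linalg.cond(V), np.linalg.cond(B))
```

On node configurations like this one, the Bernstein-Vandermonde matrix is markedly better conditioned than the monomial Vandermonde matrix, which is the effect the talk exploits.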

Matrices over Idempotent Semirings

Tolerance and strong tolerance interval eigenvectors in max-min algebra

Martin Gavalec, Jan Plavka

In most applications, the vector and matrix inputs are not exact numbers and can rather be considered as values in some interval. The classification and the properties of matrices and their eigenvectors, both with interval coefficients in max-min algebra, are studied in this contribution. The main result is the complete description of the eigenspace structure for the tolerance and the strong tolerance interval eigenproblem in max-min algebra. Polynomial algorithms are also suggested for solving the recognition version of both problems. Finally, the connection between the two interval eigenproblems is described and illustrated by numerical examples. The work has been prepared under the support of the Czech Science Foundation project 402/09/0405 and APVV 0008-10.

The robustness of interval matrices in max-min algebra

Jan Plavka, Martin Gavalec

The robustness of interval fuzzy matrices is defined via sequences of matrix powers and matrix-vector multiplications performed in max-min arithmetic which ultimately stabilize, either for all matrices or for at least one matrix from a given interval. Robust interval matrices in max-min


algebra and max-plus algebra describing discrete-event dynamic systems are studied, and equivalent conditions for interval matrices to be robust are presented. Polynomial algorithms for checking the necessary and sufficient conditions for robustness of interval matrices are introduced. In addition, more efficient algorithms for verifying the robustness of interval circulant and interval zero-one fuzzy matrices are described. The work has been prepared under the support of the Czech Science Foundation project 402/09/0405 and the Research and Development Operational Program funded by the ERDF (project number: 26220120030).

Bounds of Wielandt and Schwartz for max-algebraic matrix powers

Hans Schneider, Sergey Sergeev

By the well-known bound of Wielandt, the sequence of powers of a primitive Boolean matrix becomes periodic after (n − 1)² + 1 steps, where n is the size of the matrix or, equivalently, the number of nodes in the associated graph. In the imprimitive case, there is a sharper bound due to Schwartz making use of the cyclicity of the graph. We extend these bounds to the periodicity of columns and rows of max-algebraic powers with indices in the critical graph. This is joint work with G. Merlet and T. Nowak.
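The Wielandt bound can be illustrated on a small example of our own (a primitive 3-node digraph, not taken from the talk):

```python
import numpy as np

def boolean_power_period(A, max_t=200):
    """Return (start, period) such that the Boolean powers A^t
    satisfy A^(t+period) = A^t for all t >= start."""
    A = A.astype(bool)
    seen = {}
    P = np.eye(A.shape[0], dtype=bool)   # P = A^t, starting at t = 0
    for t in range(max_t):
        key = P.tobytes()
        if key in seen:
            return seen[key], t - seen[key]
        seen[key] = t
        P = (P.astype(int) @ A.astype(int) > 0)  # Boolean matrix product
    return None

# Primitive example: cycle 0 -> 1 -> 2 -> 0 plus a loop at node 0.
A = np.array([[1, 1, 0],
              [0, 0, 1],
              [1, 0, 0]])
start, period = boolean_power_period(A)
print(start, period)  # periodicity sets in no later than (n-1)^2 + 1 = 5
```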

Gaussian elimination and other universal algorithms

Sergei Sergeev

Linear algebra over general semirings is a known area of pure mathematics. Due to applications of idempotent semirings in optimization, timetable design and other fields, we are interested in building a more algorithmic and applied theory at the same level of abstraction. We will present some universal algorithms solving Z-matrix equations over semirings (based on joint work with G.L. Litvinov, V.P. Maslov and A.N. Sobolevskii) and discuss a semiring generalization of the Markov chain tree theorem (based on joint work with B.B. Gursoy, S. Kirkland and O. Mason).

Matrix Inequalities

Tsallis entropies and matrix trace inequalities in Quantum Statistical Mechanics

Natalia Bebiano

Extensions of trace inequalities arising in Statistical Mechanics are derived in a straightforward and unified way from a variational characterization of the Tsallis entropy. Namely, a one-parameter extension of the thermodynamic inequality is presented and its equivalence to a generalized Peierls-Bogoliubov inequality is stated, in the sense that one may be obtained from the other, and vice versa.

An order-like relation induced by the Jensen inequality

Tomohiro Hayashi

In this talk I will consider a certain order-like relation for positive operators on a Hilbert space. This relation is defined by using the Jensen inequality with respect to the square-root function. I will explain some properties of this relation.

Monomial Inequalities for p-Newton Matrices and Beyond

Charles Johnson

An ordered sequence of real numbers is Newton if the square of each element is at least the product of its two immediate neighbors, and p-Newton if, further, the sequence is positive. A real matrix is Newton (p-Newton) if the sequence of averages of its k-by-k principal minors is Newton (p-Newton). Isaac Newton is credited with the fact that symmetric matrices are Newton. Of course, a number of determinantal inequalities hold for p-Newton matrices. We are interested in exponential polynomial inequalities that hold for all p-Newton sequences. For two monomials M1 and M2 in n variables, we determine which pairs satisfy M1 ≥ M2 for all p-Newton sequences. Beyond this, we discuss what is known about inequalities between linear combinations of monomials. These tend to be characterized by highly combinatorial statements about the indices in the monomials.
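The symmetric case can be checked numerically on a small example of our own (`minor_averages` is an illustrative helper, not from the talk):

```python
import numpy as np
from itertools import combinations
from math import comb

def minor_averages(A):
    """c_k = average of the k-by-k principal minors of A, with c_0 = 1."""
    n = A.shape[0]
    c = [1.0]
    for k in range(1, n + 1):
        minors = [np.linalg.det(A[np.ix_(S, S)])
                  for S in combinations(range(n), k)]
        c.append(sum(minors) / comb(n, k))
    return c

# A symmetric matrix: the sequence (c_0, ..., c_n) should be Newton.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
c = minor_averages(A)
print(all(c[k]**2 >= c[k-1] * c[k+1] - 1e-12
          for k in range(1, len(c) - 1)))  # True
```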

A numerical radius inequality involving the generalized Aluthge transform

Fuad Kittaneh, Amer Abu Omar

A spectral radius inequality is given. An application of this inequality to prove a numerical radius inequality that involves the generalized Aluthge transform is also provided. Our results improve earlier results by Kittaneh and Yamazaki.

Fischer type determinantal inequalities for accretive-dissipative matrices

Minghua Lin

The well-known Fischer determinantal inequality for positive definite matrices says that the determinant of a partitioned matrix is bounded by the product of the determinants of its diagonal blocks. We investigate this type of inequality when the underlying matrix is accretive-dissipative, that is, a matrix whose real and imaginary parts (in the Cartesian decomposition) are both positive definite.


Two Conjectured Inequalities motivated by Quantum Structures

Rajesh Pereira, Corey O’Meara

We show how some mathematical tools used in Quantum Theory give rise to some natural questions about inequalities. We look at interrelationships between certain subsets of completely positive linear maps and their applications to results on correlation matrices, Schur products and Grothendieck's inequalities. We also show interconnections between density matrices and the permanent-on-top conjecture. These connections are used to formulate a Quantum permanent-on-top conjecture which generalizes the original conjecture.

Kwong matrices and related topics

Takashi Sano

In this talk, after reviewing our study on Loewner and Kwong (or anti-Loewner) matrices, we will show our recent results on them or generalized ones.

Characterizations of some distinguished subclasses of normal operators by operator inequalities

Ameur Seddik

In this note, we shall present some characterizations of some distinguished subclasses of normal operators (self-adjoint operators, unitary operators, unitary reflection operators, normal operators) by operator inequalities.

On Ky Fan's result on eigenvalues and real singular values of a matrix

Tin-Yau Tam, Wen Yan

Ky Fan's result states that the real parts of the eigenvalues of an n × n complex matrix A are majorized by the real singular values of A. The converse was established independently by Amir-Moez and Horn, and by Mirsky. We extend the results in the context of complex semisimple Lie algebras. The real semisimple case is also discussed. The complex skew-symmetric case and the symplectic case are explicitly worked out in terms of inequalities. The symplectic case and the odd-dimensional skew-symmetric case can be stated in terms of weak majorization. The even-dimensional skew-symmetric case involves the Pfaffian.

On some matrix inequalities involving matrix geometric mean of several matrices

Takeaki Yamazaki

In recent years, the study of geometric means of several matrices has developed considerably. In particular, one of the geometric means is called the Karcher mean, and it has many good properties. In this talk, we shall introduce some matrix inequalities relating to the Karcher mean. The results are extensions of known results, for example, the Furuta inequality, the Ando-Hiai inequality and a famous characterization of chaotic order. Recently, the Karcher mean has been considered as a limit of the power means defined by Lim-Palfia. We will also introduce some matrix inequalities related to the power mean.

Some Inequalities involving Majorization

Fuzhen Zhang

We start with discussions on angles of vectors and apply majorization to obtain some inequalities of vector angles of majorization type. We then study the majorization polytope of integral vectors. We present several properties of the cardinality function of the polytope. The first part is joint work with D. Castano and V.E. Paksoy (Nova Southeastern University, Florida, USA), and the second part with G. Dahl (University of Oslo, Norway).

Matrix Methods for Polynomial Root-Finding

A Factorization of the Inverse of the Shifted Companion Matrix

Jared Aurentz

A common method for computing the zeros of an nth-degree polynomial is to compute the eigenvalues of its companion matrix C. A method for factoring the shifted companion matrix C − ρI is presented. The factorization is a product of 2n − 1 essentially 2 × 2 matrices and requires O(n) storage. Once the factorization is computed, it immediately yields factorizations of both (C − ρI)⁻¹ and (C − ρI)*. The cost of multiplying a vector by (C − ρI)⁻¹ is shown to be O(n), and therefore any shift-and-invert eigenvalue method can be implemented efficiently. This factorization is generalized to arbitrary forms of the companion matrix.
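The underlying companion-matrix approach can be sketched as follows (a minimal illustration; the layout convention, with subdiagonal ones and the coefficients in the last column, is one of several equivalent forms):

```python
import numpy as np

def companion(p):
    """Companion matrix of a monic polynomial with coefficient
    list p = [1, a_{n-1}, ..., a_0] (highest degree first)."""
    n = len(p) - 1
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)          # subdiagonal of ones
    C[:, -1] = -np.array(p[:0:-1])      # last column: [-a_0, ..., -a_{n-1}]
    return C

# x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
C = companion([1.0, -6.0, 11.0, -6.0])
print(np.sort(np.linalg.eigvals(C).real))  # approximately [1, 2, 3]
```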

A condensed representation of generalized companion matrices

Roberto Bevilacqua, Gianna Del Corso, Luca Gemignani

In this talk we give necessary and sufficient conditions for a matrix satisfying AH = p(A) + C to be reducible to block tridiagonal form, where the size of the blocks depends on the rank of the correction matrix C. We show how to unitarily reduce the matrix to this condensed form by applying the block-Lanczos process to a suitable matrix, starting from a basis of the space spanned by the columns of the matrix CA − AC. Among this class of matrices, which we call almost normal, we find companion, comrade and colleague matrices.

Multiplication of matrices via quasiseparable generators and eigenvalue iterations for block triangular matrices

Yuli Eidelman

We study special multiplication algorithms for matrices represented in quasiseparable form. Such algorithms appear in a natural way in the design of iterative eigenvalue solvers for structured matrices. We discuss a general method which may be used as a main building block in the design of fast LR-type methods for upper Hessenberg or block triangular matrices. A comparison with other existing methods is performed and the results of numerical tests are presented.

The Ehrlich-Aberth method for structured eigenvalue refinement

Luca Gemignani

This talk is concerned with the computation of a structured approximation of the spectrum of a structured eigenvalue problem by means of a root-finding method. The approach discussed here consists in finding the structured approximation by using the Ehrlich-Aberth algorithm for the refinement of an unstructured approximation that provides a set of initial guesses. Applications to T-palindromic, H-palindromic and even/odd eigenvalue problems will be presented. This work is partly joint with S. Brugiapaglia and V. Noferini.
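For reference, a basic unstructured Ehrlich-Aberth iteration looks as follows (polynomial and initial guesses chosen arbitrarily by us; the talk's contribution is its structured refinement, not this textbook form):

```python
import numpy as np

def ehrlich_aberth(coeffs, iters=100):
    """Basic Ehrlich-Aberth iteration for all roots of a polynomial
    (coefficients given highest degree first)."""
    p = np.poly1d(coeffs)
    dp = p.deriv()
    n = len(coeffs) - 1
    # Distinct initial guesses on a small circle in the complex plane.
    z = 0.5 * np.exp(2j * np.pi * np.arange(n) / n) + 0.1
    for _ in range(iters):
        N = p(z) / dp(z)                      # Newton corrections
        S = np.array([np.sum(1.0 / (z[i] - np.delete(z, i)))
                      for i in range(n)])
        z = z - N / (1.0 - N * S)             # Aberth correction
    return z

# x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3.
roots = ehrlich_aberth([1.0, -6.0, 11.0, -6.0])
print(np.sort(roots.real))  # approximately [1, 2, 3]
```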

Generalized Hurwitz matrices and forbidden sectors of the complex plane

Olga Holtz, Sergey Khrushchev, Olga Kushel, Mikhail Tyaglov

We investigate polynomials whose generalized Hurwitz matrices (to be defined in the talk) are totally nonnegative. Using a generalized Euclidean algorithm, we establish that their zeros avoid specific sectors in the complex plane. Multiple root interlacing turns out to be ultimately related to this phenomenon as well, although that connection is not yet completely understood. Finally, we suggest some generalizations and open problems.

A generalized companion method to solve systems of polynomials

Matthias Humet, Marc Van Barel

A common method to compute the roots of a univariate polynomial is to solve for the eigenvalues of its companion matrix. If the polynomial is given in a Chebyshev basis, one refers to this matrix as the colleague matrix. By generalizing this procedure to multivariate polynomials, we are able to solve a system of n-variate polynomials by means of an eigenvalue problem. We obtain an efficient method that yields numerically accurate solutions for well-behaved polynomial systems. An advantage of the method is that it can work with any polynomial basis.

Real and Complex Polynomial Root-finding by Means of Eigen-solving

Victor Pan

Our new numerical algorithms approximate the real and complex roots of a univariate polynomial lying near a selected point of the complex plane, all its real roots, and all its roots lying in a fixed half-plane or in a fixed rectangular region. The algorithms seek the roots of a polynomial as the eigenvalues of the associated companion matrix. Our analysis and experiments show their efficiency. We employ some advanced machinery available for matrix eigen-solving, exploit the structure of the companion matrix, and apply randomized matrix algorithms, repeated squaring, matrix sign iteration and subdivision of the complex plane. Some of our techniques can be of independent interest.

Solving secular and polynomial equations: a multiprecision algorithm

Dario Bini, Leonardo Robol

We present a numerical algorithm to approximate all the solutions of the secular equation

$$\sum_{i=1}^{n} \frac{a_i}{x - b_i} - 1 = 0$$

with any given number of guaranteed digits. Two versions are provided. The first relies on the MPSolve approach of D. Bini and G. Fiorentino [Numer. Algorithms 23, 2000], the second on the approach of S. Fortune [J. of Symbolic Computation 33, 2002] for computing polynomial roots. Both versions are based on a rigorous backward error analysis and on the concept of root neighborhood combined with Gerschgorin theorems, where the properties of diagonal plus rank-one matrices are exploited.
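A minimal sketch of a secular equation of this form, solved here on invented data by plain Newton iteration rather than by the guaranteed multiprecision algorithm of the talk:

```python
import numpy as np

def secular(x, a, b):
    """f(x) = sum_i a_i / (x - b_i) - 1."""
    return np.sum(a / (x - b)) - 1.0

# Invented data; f(x) = 1/x + 2/(x-3) - 1 has roots 3 +/- sqrt(6).
a = np.array([1.0, 2.0])
b = np.array([0.0, 3.0])
x = 6.0                                    # starting guess
for _ in range(50):
    fp = -np.sum(a / (x - b) ** 2)         # f'(x)
    x -= secular(x, a, b) / fp             # Newton step
print(x)  # converges to 3 + sqrt(6) ~ 5.449
```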

An implementation based on the GNU multiprecision package (GMP) is provided and is included in release 3.0 of the package MPSolve. It is downloadable from http://riccati.dm.unipi.it/mpsolve. The new release computes both the solutions of a secular equation and the roots of a polynomial assigned in terms of its coefficients. From the many numerical experiments, it turns out that the new release is generally much faster than MPSolve 2.0 and than the package Eigensolve of S. Fortune (http://ect.bell-labs.com/who/sjf/eigensolve.html). For certain polynomials, like the Mandelbrot or the partition polynomials, the acceleration is dramatic.

Comparisons with techniques based on rank-structured matrices are performed. The extensions to computing eigenvalues of matrix polynomials and roots of scalar polynomials represented in different bases are discussed.

The Fitting of Convolutional Models for Complex Polynomial Root Refinement

Peter Strobach

In the technical sciences, there exists a strong demand for high-precision root finding of univariate polynomials with real or complex coefficients. One can distinguish


between cases of low-degree polynomials with high coefficient dynamics (typically "Wilkinson-like" polynomials) and high-degree, low-coefficient-variance polynomials like random coefficient polynomials. The latter prototype case is typically found in signal and array processing, for instance, in the analysis of long z-transform polynomials of signals buried in noise, speech processing, or in finding the roots of z-transfer functions of high-degree FIR filters. In these areas, the polynomial degrees n can be excessively high, ranging up to n = 10000 or higher. Multiprecision software is practically not applicable in these cases, because of the O(n²) complexity of the problem and the resulting excessive runtimes. Established double precision root-finders are sufficiently fast but too inaccurate.

We demonstrate that the problem can be solved in standard double precision arithmetic by a two-stage procedure consisting of a fast standard double precision root finder for computing "coarse" initial root estimates, followed by a noniterative root refinement algorithm based on the fitting of a convolutional model onto the given "clean" coefficient sequence. We exploit the fact that any degree-n coefficient sequence can be posed as the convolution of a linear factor (degree 1) polynomial (effectively representing a root) and a corresponding degree n − 1 subpolynomial coefficient sequence representing the remaining roots. Once a coarse estimate of a root is known, the corresponding degree n − 1 subpolynomial coefficient sequence is simply computed as the solution of a least squares problem. Now both the linear factor and the degree n − 1 subpolynomial coefficient sequences are approximately known. Using this information as input, we can finally fit both the linear factor and the degree n − 1 subpolynomial coefficients jointly onto the given "clean" degree-n coefficient sequence, which is a nonlinear approximation problem. This concept amounts to the evaluation of a sparse Jacobian matrix. When solved appropriately, this results in a complex orthogonal filter-type algorithm for recomputing a given root estimate with substantially enhanced numerical accuracy.
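The least-squares subpolynomial step can be sketched as follows (a simplified illustration of ours, not the author's convolutional-model fit):

```python
import numpy as np

def deflate_ls(c, r):
    """Given coefficients c of a degree-n polynomial and a coarse root
    estimate r, find q of degree n-1 with conv([1, -r], q) ~ c
    in the least squares sense."""
    n = len(c) - 1
    # Convolution matrix of the linear factor [1, -r], shape (n+1) x n.
    M = np.zeros((n + 1, n), dtype=complex)
    for j in range(n):
        M[j, j] += 1.0        # coefficient 1 of the linear factor
        M[j + 1, j] += -r     # coefficient -r of the linear factor
    q, *_ = np.linalg.lstsq(M, np.asarray(c, dtype=complex), rcond=None)
    return q

# (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6; deflate the root at 1.
q = deflate_ls([1.0, -6.0, 11.0, -6.0], 1.0)
print(q.real)  # approximately [1, -5, 6], i.e. (x - 2)(x - 3)
```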

Simulation results are shown for several types of polynomials which typically occur in signal and array processing. For instance, random coefficient polynomials up to a degree of n = 64000, where we reduce the absolute root errors of a standard double precision root finder by a factor of approximately 1000 (or 3 additional accurate mantissa digits in standard double precision arithmetic), or the roots of a complex chirp polynomial of degree n = 2000, where we reduced the absolute root errors by a factor of approximately 10000 using the new method.

Companion pencil vs. Companion matrix: Can we ameliorate the accuracy when computing roots?

Raf Vandebril

The search for accurate and reliable methods for quickly retrieving roots of polynomials based on matrix eigenvalue computations has not yet ended. After the first attempts, based on suitable modifications of the QR algorithm applied to companion matrices trying to keep possession of the structure as much as possible, we have seen a recent shift of attention towards nonunitary LR algorithms and pencils.

In this lecture we will investigate the possibilities of developing reliable QZ algorithms for rootfinding. We thus step away from the classical companion setting and turn to the companion pencil approach. This creates more flexibility in the distribution of the polynomial coefficients and allows us an enhanced balancing of the associated pencil. Of course we pay a price: computing time and memory usage. We will numerically compare this novel QZ algorithm with a former QR algorithm in terms of accuracy, balancing sensitivity, speed and memory consumption.

Fast computation of eigenvalues of companion, comrade, and related matrices

Jared Aurentz, Raf Vandebril, David Watkins

The usual method for computing the zeros of a polynomial is to form the companion matrix and compute its eigenvalues. In recent years several methods that do this computation in O(n²) time with O(n) memory by exploiting the structure of the companion matrix have been proposed. We propose a new class of methods of this type that utilizes a factorization of the comrade (or similar) matrix into a product of essentially 2 × 2 matrices times a banded upper-triangular matrix. Our algorithm is a non-unitary variant of Francis's implicitly shifted QR algorithm that preserves this factored form. We will present numerical results and compare our method with other methods.

Matrix Methods in Computational Systems Biology and Medicine

Two-Fold Irreducible Matrices: A New Combinatorial Class with Applications to Random Multiplicative Growth

Lee Altenberg

Two-Fold Irreducible (TFI) matrices are a new combinatorial class intermediate between primitive and fully indecomposable matrices. They are the necessary and sufficient class for strict log-convexity of the spectral radius r(e^D A) with respect to nonscalar variation in the diagonal matrix D. Matrix A is TFI when A or A² is irreducible and AᵀA or AAᵀ is irreducible, in which case A, A², AᵀA, and AAᵀ are all irreducible, and equivalently, the directed graph and simple bipartite graph associated with A are each connected. Thus TFI matrices are the intersection of irreducible and chainable matrices. A Markov chain is TFI if and only if it is ergodic and running the process alternately forward and backward is also ergodic. The problem was motivated by a question from Joel E. Cohen, who discovered that a stochastic process N(t) = s_t · · · s_2 s_1 N(0) approaches Taylor's law in the limit of large t: lim_{t→∞} [log Var(N(t)) − b log E(N(t))] = log a, for some a > 0 and b, when D


is nonscalar, the spectral radius of DP is not 1, and the transition matrix P for the finite Markov chain (s1, s2, s3, ...) meets conditions needing to be characterized. The requisite condition is that P be two-fold irreducible.
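For small matrices the defining conditions can be verified directly (a sketch of ours; `irreducible` uses the classical test that a nonnegative A is irreducible iff (I + A)^(n−1) > 0):

```python
import numpy as np

def irreducible(A):
    """Nonnegative A is irreducible iff (I + A)^(n-1) is entrywise positive."""
    n = A.shape[0]
    B = np.linalg.matrix_power(np.eye(n) + (A > 0), n - 1)
    return bool(np.all(B > 0))

def two_fold_irreducible(A):
    """A is TFI when A or A^2 is irreducible and A^T A or A A^T is irreducible."""
    A2 = A @ A
    return ((irreducible(A) or irreducible(A2)) and
            (irreducible(A.T @ A) or irreducible(A @ A.T)))

# A 3-cycle permutation matrix: irreducible, but P^T P = I is reducible,
# so P is not TFI (its bipartite graph is disconnected).
P = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(irreducible(P), two_fold_irreducible(P))  # True False
```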

Understanding real-world systems using random-walk-based approaches for analyzing networks

Natasa Durdevac, Marco Sarich, Sharon Bruckner, Tim Conrad, Christof Schuette

Real-world systems are often modeled as networks. Many algorithms for analyzing such complex networks are oriented towards (1) finding modules (or clusters, communities), that is, densely inter-connected substructures having sparse connections to the rest of the network, and (2) finding hubs that are key connectors of modules. In many cases modules and hubs correspond to relevant structures in the original system, so their identification is important for understanding the organization of the underlying real-world system. For example, in biological systems modules often correspond to functional units (like protein complexes) and hubs to essential parts of this system (like essential proteins).

In this talk, we will present a novel spectral-based approach for efficient fuzzy network partitioning into modules. We will consider random walk processes on undirected networks, for which modules will correspond to metastable sets: regions of the network where a random walker stays for a long time before exiting. In contrast to standard spectral clustering approaches, we will define different transition rules for the random walk, which will lead to much more prominent gaps in the spectrum of the adapted random walk process. This allows easier identification of the network's modular structures and resolves the problem of alternating, cyclic structures being identified as modules, improving upon previous methods.

The mathematical framework that will be presented in this talk includes solving several interesting linear algebra problems.

Stochastic Dimension Reduction Via Matrix Factorization in Genomics and Proteomics

Amir Niknejad

Over the last decade, matrix factorization and dimension reduction methods have been used for mining in genomics and proteomics. In this talk we cover the basics of DNA microarray technology and dimension reduction techniques for the analysis of gene expression data. We also discuss how biological information in a gene expression matrix can be extracted using a sampling framework, along with matrix factorization incorporating a rank approximation of the gene matrix.

Improved molecular simulation strategies using matrix analysis

Marcus Weber, Konstantin Fackeldey

According to the state of the art, complex molecular conformation dynamics can be expressed by Markov State Models. The construction of these models usually needs a lot of short-time trajectory simulations. In this talk, methods based on linear algebra and matrix analysis are presented which help to reduce the computational costs of molecular simulation.

Multilinear Algebra and Tensor Decompositions

Between linear and nonlinear: numerical computation of tensor decompositions

Lieven De Lathauwer, Laurent Sorber, Marc Van Barel

Important progress has recently been made in numerical multilinear algebra. It has been recognized that tensor product structure allows a very efficient storage and handling of the Jacobian and (approximate) Hessian of the cost function. On the other hand, multilinearity allows global optimization in (scaled) line and plane search. Although there are many possibilities for decomposition symmetry and factor structure, these may be conveniently handled. We demonstrate the algorithms using Tensorlab, our recently published MATLAB toolbox for tensors and tensor computations.

Alternating minimal energy methods for linear systems in higher dimensions. Part II: Faster algorithm and application to nonsymmetric systems

Sergey Dolgov, Dmitry Savostyanov

In this talk we further develop and investigate rank-adaptive alternating methods for high-dimensional tensor-structured linear systems. We introduce a recurrent variant of the ALS method equipped with basis enrichment, which performs a subsequent linear system reduction and seems to be more accurate in some cases than the method based on a global steepest descent correction. In addition, we propose certain heuristics to reduce the complexity, and discuss how the method can be applied to nonsymmetric systems. The efficiency of the technique is demonstrated on several examples of the Fokker-Planck and chemical master equations.

Alternating minimal energy methods for linear systems in higher dimensions. Part I: SPD systems

Sergey Dolgov, Dmitry Savostyanov

We introduce a family of numerical algorithms for the solution of linear systems in higher dimensions, with the matrix and right-hand side given and the solution sought in the tensor train format. The proposed methods are rank-adaptive and follow the alternating directions framework, but in contrast to ALS methods, in each iteration


a tensor subspace is enlarged by a set of vectors chosen similarly to the steepest descent algorithm. The convergence is analyzed in the presence of approximation errors, and the geometric convergence rate is estimated and related to that of the steepest descent. The complexity of the presented algorithms is linear in the mode size and dimension, and the convergence demonstrated in the numerical experiments is comparable to that of DMRG-type algorithms. A numerical comparison with conventional tensor methods is provided.

The number of singular vector tuples and the uniqueness of best (r1, . . . , rd) approximation of tensors

Shmuel Friedland

In the first part of the talk we give the exact number of singular vector tuples of a complex d-mode tensor. In the second part of the talk we show that for a generic real d-mode tensor the best (r1, . . . , rd) approximation is unique. Moreover, for a generic symmetric d-mode tensor the best rank-one approximation is unique, hence symmetric. Similar results hold for partially symmetric tensors. The talk is based on a recent joint paper with Giorgio Ottaviani and a recent paper of the speaker.

From tensors to wavelets and back

Vladimir Kazeev

The Tensor Train (TT) decomposition was introduced recently for the low-parametric representation of high-dimensional tensors. Among the tensor decompositions based on the separation of variables, the TT representation is remarkable due to the following two key features. First, it relies on the successive low-rank factorization (exact or approximate) of certain matrices related to the tensor to be represented. As a result, the low-rank approximation crucial for basic operations of linear algebra can be implemented in the TT format with the use of standard matrix algorithms, such as SVD and QR. Second, in many practical cases the particular scheme of separation of variables, which underlies the TT representation, does ensure linear (or almost linear) scaling of the memory requirements and of the complexity with respect to the number of dimensions.

Each dimension of a tensor can be split into a few virtual levels representing different scales within the dimension. Such a transformation, referred to as quantization, results in a tensor with more dimensions, smaller mode sizes and the same entries. The TT decomposition, being applied to a tensor after quantization, is called the Quantized Tensor Train (QTT) decomposition. By attempting to separate all virtual dimensions and approximate the hierarchy of the scales, the QTT representation seeks more structure in tensors, compared to the TT format without quantization.
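The quantization reshape and the successive low-rank factorizations described above can be sketched in a few lines of NumPy. This is an illustrative TT-SVD prototype under assumed names (`tt_svd`, `tt_to_full`), not code from the talk:

```python
import numpy as np

def tt_svd(tensor, eps=1e-10):
    """Decompose a d-way tensor into TT cores via successive truncated SVDs."""
    dims = tensor.shape
    cores, r_prev = [], 1
    unfolding = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(unfolding, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))        # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        unfolding = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(unfolding.reshape(r_prev, dims[-1], 1))
    return cores

def tt_to_full(cores):
    """Contract TT cores back into the full tensor (for verification only)."""
    res = cores[0]
    for core in cores[1:]:
        res = np.tensordot(res, core, axes=([-1], [0]))
    return res.reshape([c.shape[1] for c in cores])

# Quantization: a vector of length 2**4 becomes a 4-way tensor of mode size 2;
# for x[i] = 2**i every virtual level separates, so all QTT ranks are 1.
x = 2.0 ** np.arange(16)
cores = tt_svd(x.reshape((2,) * 4))
```

For this fully separable vector every intermediate rank collapses to 1, which is exactly the extra structure quantization is meant to expose.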

Consider the approximation of a given tensor in the TT format. Even when the low-rank matrix factorization mentioned above is performed crudely, i.e. with too small ranks, the factors carry a certain description of the tensor being approximated. The most essential TT “components”, which are represented by those factors, can be filtered out from other tensors with the hope that the images are going to be sparse. Oseledets and Tyrtyshnikov proposed this approach under the name Wavelet Tensor Train (WTT) transform and showed the transform to be linear and orthogonal. Having been constructed with the use of quantization, it extracts the hierarchy of virtual levels in terms of that of the reference tensor and can be seen as an algebraic wavelet-type transform.

Our contribution consists in closing the “tensor-wavelet loop” by analyzing the tensor structure of the WTT transform. We give an explicit TT decomposition of its matrix in terms of that of the reference tensor. In particular, we establish a relation between the TT ranks of the two. Also, for a tensor given in the TT format we construct an explicit TT representation of its WTT image and bound the TT ranks of the latter. Finally, we show that the matrix of the WTT transform is sparse.

In short, the matrix of the WTT transform turns out to be a sparse, orthogonal matrix with low-rank TT structure, adapted in a certain way to a given vector.

This is joint work with Ivan Oseledets (INM RAS, Moscow).

Tensor-based Regularization for Multienergy X-ray CT Reconstruction

Oguz Semerci, Ning Hao, Misha Kilmer, Eric Miller

The development of energy selective, photon counting X-ray detectors opens up a wide range of possibilities in the area of computed tomographic image formation. Under the assumption of perfect energy resolution, we propose a tensor-based iterative algorithm that simultaneously reconstructs the X-ray attenuation distribution for each energy. We model the multi-spectral unknown as a 3-way tensor where the first two dimensions are space and the third dimension is energy. We propose a tensor nuclear norm regularizer based on the tensor-SVD of Kilmer and Martin (2011), which is a convex function of the multi-spectral unknown. Simulation results show that the tensor nuclear norm can be used as a stand-alone regularization technique for the energy selective (spectral) computed tomography (CT) problem and can be combined with additional spatial regularization for high quality image reconstructions.

Multiplying higher-order tensors

Lek-Heng Lim, Ke Ye

Two n×n matrices may be multiplied to obtain a third n×n matrix in the usual way. Is there a multiplication rule that allows one to multiply two n×n×n hypermatrices to obtain a third n×n×n hypermatrix? It depends on what we mean by multiplication. Matrix multiplication comes from composition of linear operators and is a coordinate-independent operation. Linear operators are of course 2-tensors and matrices are their coordinate representations. If we require our multiplication of n×n×n hypermatrices to come from multiplication of 3-tensors, then we show that the answer is no. In fact, we will see that two d-tensors can be multiplied to yield a third d-tensor if and only if d is even, in which case the multiplication rule is essentially given by tensor contractions.

Wrestling with Alternating Least Squares

Martin Mohlenkamp

Alternating Least Squares is the simplest algorithm for approximating a tensor by a sum of rank-one tensors. Yet even it, even on small problems, has (unpleasantly) rich behavior. I will describe my efforts to understand what it is doing and why.
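For reference, a bare-bones ALS sweep for approximating a 3-way tensor by a sum of R rank-one terms (the CP format) can be sketched as below; the helper names and the plain pseudoinverse updates are our illustrative assumptions, not the speaker's code:

```python
import numpy as np

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (J x R) and Y (K x R) -> (J*K) x R."""
    return (X[:, None, :] * Y[None, :, :]).reshape(-1, X.shape[1])

def cp_als(T, R, iters=100, seed=0):
    """Plain ALS for a rank-R CP approximation A, B, C of a 3-way tensor T.
    Each step solves a linear least squares problem for one factor matrix
    while the other two are held fixed."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    T1 = T.reshape(I, J * K)                       # mode-1 unfolding
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)    # mode-2 unfolding
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)    # mode-3 unfolding
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(khatri_rao(B, C).T)
        B = T2 @ np.linalg.pinv(khatri_rao(A, C).T)
        C = T3 @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C
```

Even this ten-line version already exhibits the behavior the talk wrestles with: slow "swamps", sensitivity to initialization, and degenerate factor configurations.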

Tensor decompositions and optimization problems

Eugene Tyrtyshnikov

Tensor decompositions, especially Tensor Train (TT) and Hierarchical Tucker (HT), are fast becoming useful and widely used computational instruments in numerical analysis and numerous applications. As soon as the input vectors are presented in the TT (HT) format, basic algebraic operations can be efficiently implemented in the same format, at least in a good lot of practical problems. A crucial thing, however, is to acquire the input vectors in this format. In many cases this can be accomplished via the TT-CROSS algorithm, which is a far-reaching extension of the matrix cross interpolation algorithms. We discuss some properties of TT-CROSS that allow us to adapt it to the needs of solving a global optimization problem. After that, we present a new global optimization method based on special transformations of the scoring functional and TT decompositions of multi-index arrays of values of the scoring functional.

We show how this new method works in the direct docking problem, which is the problem of accommodating a ligand molecule into a larger target protein so that the interaction energy is minimized. The degrees of freedom in this problem amount to several tens. The most popular techniques are genetic algorithms, Monte Carlo and molecular dynamics approaches. We have found that the new method can be up to one hundred times faster on typical protein-ligand complexes.

On convergence of ALS and MBI for approximation in the TT format

Andre Uschmajew

We consider the alternating least squares algorithm (ALS) and the recently proposed maximum block improvement method (MBI, SIAM J. Optim., 22 (2012), pp. 87–107) for optimization with TT tensors. For both, a rank condition on the Hessian of the cost function can be imposed to prove local convergence. We review these conditions and derive singular value gap conditions for the approximated tensor which ensure them. Additionally, the MBI method has the advantage of producing subsequences that converge globally to critical points under a boundedness condition. Both methods shall be compared in their numerical performance.

Properties of the Higher-Order Generalized Singular Value Decomposition

Charles Van Loan, Orly Alter

The higher-order SVD is a way of simultaneously reducing each matrix in a collection A1, . . . , AN to an SVD-like form that permits one to identify common features. Each Ai has the same number of columns. The invariant subspace associated with the minimum eigenvalue of the matrix (S1 + · · · + SN)(S1^{-1} + · · · + SN^{-1}) is involved, where Si = Ai^T Ai. We are able to compute this subspace stably by using a higher-order CS decomposition, a new reduction that we shall present in the talk. The behavior of the overall procedure when various error tolerances are invoked will be discussed, as will connections to PARAFAC2.

A Preconditioned Nonlinear Conjugate Gradient Algorithm for Canonical Tensor Decomposition

Manda Winlaw, Hans De Sterck

The alternating least squares (ALS) algorithm is currently the workhorse algorithm for computing the canonical tensor decomposition. We propose a new algorithm that uses ALS as a nonlinear preconditioner for the nonlinear conjugate gradient (NCG) algorithm. Alternatively, we can view the NCG algorithm as a way to accelerate the ALS iterates. To investigate the performance of the resulting preconditioned nonlinear conjugate gradient (PC-NCG) algorithm, we perform numerical tests using our algorithm to factorize a number of different tensors.

Nonlinear Eigenvalue Problems

A Padé Approximate Linearization Technique for Solving the Quadratic Eigenvalue Problem with Low-Rank Damping

Xin Huang, Ding Lu, Yangfeng Su, Zhaojun Bai

The low-rank damping term appears commonly in the quadratic eigenvalue problem arising from real physical simulations. To exploit this low-rank property, we propose an approximate linearization technique via Padé approximants, PAL in short. One of the advantages of the PAL technique is that the dimension of the resulting linear eigenvalue problem is only n + rm, which is generally substantially smaller than the dimension 2n of the linear eigenvalue problem derived by an exact linearization scheme, where n is the dimension of the original QEP, and r and m are the rank of the damping matrix and the order of the Padé approximant, respectively. In addition, we propose a scaling strategy aiming at the minimization of the backward error of the PAL approach.

Distance problems for Hermitian matrix polynomials

Shreemayee Bora, Ravi Srivastava

Hermitian matrix polynomials with hyperbolic, quasi-hyperbolic or definite structure arise in a number of applications in science and engineering. A distinguishing feature of such polynomials is that their eigenvalues are real, with the associated sign characteristic being either positive or negative. We propose a definition of the sign characteristic of a real eigenvalue of a Hermitian polynomial which is based on the homogeneous form of the polynomial, and use it to analyze the distance from a given member of each of these classes to a nearest member outside the class with respect to a specified norm. The analysis is based on Hermitian ε-pseudospectra and results in algorithms based on the bisection method to compute these distances. The main computational work in each step of bisection involves finding either the largest eigenvalue/eigenvalues of a positive definite matrix or the smallest eigenvalue/eigenvalues of a negative definite matrix, along with corresponding eigenvectors; the algorithms work for all the above mentioned structures except for certain classes of definite polynomials.

Spectral equivalence of Matrix Polynomials, the Index Sum Theorem and consequences

Fernando De Teran, Froilan Dopico, Steve Mackey

In this talk we introduce the spectral equivalence relation on matrix polynomials (both square and rectangular). We show that the invariants of this equivalence relation are: (a) the dimension of the left and right null spaces, (b) the finite elementary divisors, and (c) the infinite elementary divisors. We also compare this relation with other equivalence relations in the literature [1,2]. We analyze what happens with the remaining part of the “eigenstructure”, namely the left and right minimal indices, of two spectrally equivalent matrix polynomials. Particular cases of spectrally equivalent polynomials of a given one are linearizations (polynomials of degree one) and, more generally, ℓ-ifications (polynomials of degree ℓ). We analyze the role played by the degree in the spectral equivalence. This leads us to the Index Sum Theorem, an elementary relation between the sum of the finite and infinite structural indices (degrees of elementary divisors), the sum of all minimal indices, the degree, and the rank of any matrix polynomial. Though elementary in both statement and proof, the Index Sum Theorem has several powerful consequences. We show some of them, mostly related to structured ℓ-ifications, in the last part of the talk.

[1] N. Karampetakis, S. Vologiannidis. Infinite elementary divisor structure-preserving transformations for polynomial matrices. Int. J. Appl. Math. Comput. Sci., 13 (2003) 493–503.

[2] A. C. Pugh, A. K. Shelton. On a new definition of strict system equivalence. Int. J. Control, 27 (1978) 657–672.

Block iterations for eigenvector nonlinearities

Elias Jarlebring

Let A(V) ∈ R^{n×n} be a symmetric matrix depending on V ∈ R^{n×k}, where V is a basis of a vector space and A(V) is independent of the choice of basis of that vector space. We here consider the problem of computing V such that (Λ, V) is an invariant pair of the matrix A(V), i.e., A(V)V = VΛ. We present a block algorithm for this problem, where every step involves solving one or several linear systems of equations. The algorithm is a generalization of (shift-and-invert) simultaneous iteration for the standard eigenvalue problem, and we show that the generalization inherits many of its properties. The algorithm is illustrated with an application to a model problem in quantum chemistry.

A Filtration Perspective on Minimal Indices

D. Steven Mackey

In addition to eigenvalues, singular matrix polynomials also possess scalar invariants called “minimal indices” that are significant in a number of application areas, such as control theory and linear systems theory. In this talk we discuss a new approach to the definition of these quantities based on the notion of a filtration of a vector space. The goal of this filtration formulation is to provide new tools for working effectively with minimal indices, as well as to gain deeper insight into their fundamental nature. Several illustrations of the use of these tools will be given.

An overview of Krylov methods for nonlinear eigenvalue problems

Karl Meerbergen, Roel Van Beeumen, Wim Michiels

This talk is about the solution of a class of nonlinear eigenvalue problems. Krylov and rational Krylov methods are known to be efficient and reliable for the solution of linear and polynomial eigenvalue problems. In earlier work, we suggested approximating the nonlinear matrix function by interpolating polynomials, which led to the Taylor-Arnoldi, infinite Arnoldi and Newton-Hermite rational Krylov methods. These methods are flexible in that the number of iterations and the interpolation points can be chosen during the execution of the algorithm. All these methods have in common that they interpolate, i.e., they match moments of a related dynamical system with nonlinear dependence on the eigenvalue parameter. The difference lies in the inner product that is used for an infinite dimensional problem. Numerical experiments showed that the impact of a specific inner product is hard to understand. This leads to the conclusion that we had better project the nonlinear eigenvalue problem onto the subspace of the moment vectors, leading to a small-scale nonlinear eigenvalue problem. The disadvantage is that we again have to solve a nonlinear eigenvalue problem, with small matrices this time. We will again use interpolating polynomials within a rational Krylov framework for the small-scale problem. We then give an overview of different possibilities for restarting interpolation-based Krylov methods, which allows for computing invariant pairs of nonlinear eigenvalue problems. We will also formulate open questions, for both small and large scale problems.

Skew-symmetry - a powerful structure for matrix polynomials

Steve Mackey, Niloufer Mackey, Christian Mehl, Volker Mehrmann

We consider skew-symmetric matrix polynomials, i.e., matrix polynomials all of whose coefficient matrices are skew-symmetric.

In the talk, we develop a skew-symmetric Smith form for skew-symmetric matrix polynomials under structure-preserving unimodular equivalence transformations. From this point of view, the structure of being skew-symmetric proves to be powerful, because analogous results for other structured matrix polynomials are not available.

We will apply the so-called skew-Smith form to deduce a result on the existence of skew-symmetric strong linearizations of regular skew-symmetric matrix polynomials. This is in contrast to other important structures, like T-alternating and T-palindromic polynomials, where restrictions in the elementary divisor structure of matrix polynomials and matrix pencils have been observed, with the effect that structured strong linearizations of such structured matrix polynomials need not always exist.

Further applications of the skew-Smith form presented in the talk include the solution of a problem on the existence of symmetric factorizations of skew-symmetric rational functions.

Vector spaces of linearizations for matrix polynomials: a bivariate polynomial approach

Alex Townsend, Vanni Noferini, Yuji Nakatsukasa

We revisit the important paper [D. S. Mackey, N. Mackey, C. Mehl and V. Mehrmann, SIAM J. Matrix Anal. Appl., 28 (2005), pp. 971–1004] and, by viewing matrices as coefficients for bivariate polynomials, we provide concise proofs for key properties of linearizations of matrix polynomials. We also show that every pencil in the double ansatz space is intrinsically connected to a Bézout matrix, which can be used to prove the eigenvalue exclusion theorem. Furthermore, we exploit this connection in order to obtain a new result on the Kronecker canonical form of DL(P, v) in the case where this pencil is not a linearization.

In addition, our exposition allows for any degree-graded basis, the monomials being a special case.

Duality of matrix pencils, singular pencils and linearizations

Vanni Noferini, Federico Poloni

We consider a new relation among matrix pencils, duality: two pencils xL1 + L0 and xR1 + R0 are said to be duals if L1R0 = R1L0, plus a rank condition. Duality transforms Kronecker blocks in a predictable way, and can be used as a theoretical tool to derive results in different areas of the theory of matrix pencils and linearizations. We discuss Wong chains, a generalization of eigenvalues and Jordan chains to singular pencils, and how they change under duality. We use these tools to derive a result on the possible minimal indices of Hamiltonian and symplectic pencils, to study vector spaces of linearizations as in [Mackey2, Mehrmann, Mehl 2006], and to revisit the proof that Fiedler pencils associated to singular matrix polynomials are linearizations [De Teran, Dopico, Mackey 2009].

Exploiting low rank of damping matrices using the Ehrlich-Aberth method

Leo Taslaman

We consider quadratic matrix polynomials Q(λ) = Mλ^2 + Dλ + K corresponding to vibrating systems with a low-rank damping matrix D. Our goal is to exploit the low-rank property of D to compute all eigenvalues of Q(λ) more efficiently than conventional methods. To this end, we use the Ehrlich-Aberth method recently investigated by Bini and Noferini for computing the eigenvalues of matrix polynomials. For general matrix polynomials, the Ehrlich-Aberth method computes eigenvalues with good accuracy but is unfortunately relatively slow when the size of the matrix polynomial is large and the degree is low. We revise the algorithm and exploit the special structure of our systems to push down the bulk sub-computation from cubic to linear time (in matrix size), obtaining an algorithm that is both fast and accurate.
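For orientation, the scalar Ehrlich-Aberth iteration that underlies the method can be sketched as follows. This toy version runs on an explicit scalar polynomial, whereas the algorithms discussed in the talk apply the same simultaneous Newton-type corrections to Q(λ) through structured linear algebra:

```python
import numpy as np

def aberth(coeffs, iters=50):
    """Ehrlich-Aberth simultaneous iteration for all roots of a scalar
    polynomial (coefficients in NumPy order, highest degree first)."""
    n = len(coeffs) - 1
    dcoeffs = np.polyder(np.poly1d(coeffs)).coeffs
    # distinct initial guesses on a circle in the complex plane
    z = 1.5 * np.exp(2j * np.pi * (np.arange(n) + 0.25) / n)
    for _ in range(iters):
        w = np.polyval(coeffs, z) / np.polyval(dcoeffs, z)   # Newton corrections
        for i in range(n):
            repulsion = np.sum(1.0 / (z[i] - np.delete(z, i)))
            z[i] -= w[i] / (1.0 - w[i] * repulsion)
    return z
```

Each update is a Newton step deflated by the repulsion term, which keeps the approximations from collapsing onto the same root.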

A Review of Nonlinear Eigenvalue Problems

Francoise Tisseur

In its most general form, the nonlinear eigenvalue problem is to find scalars λ (eigenvalues) and nonzero vectors x and y (right and left eigenvectors) satisfying N(λ)x = 0 and y*N(λ) = 0, where N : Ω → C^{n×n} is an analytic function on an open set Ω ⊆ C. In practice, the matrix elements are most often polynomial, rational or exponential functions of λ, or a combination of these. These problems underpin many areas of computational science and engineering. They can be difficult to solve due to large problem size, ill conditioning (which means that the problem is very sensitive to perturbations and hence hard to solve accurately), or simply a lack of good numerical methods. My aim is to review recent research directions in this area.

The Inverse Symmetric Quadratic Eigenvalue Problem

Ion Zaballa, Peter Lancaster

The detailed spectral structure of symmetric quadratic matrix polynomials has been developed recently. Their canonical forms are shown to be useful in providing a detailed analysis of inverse problems of the form: construct the coefficients of a symmetric quadratic matrix polynomial from the spectral data, including the classical eigenvalue/eigenvector data and sign characteristics for the real eigenvalues. An orthogonality condition dependent on these signs plays a vital role in this construction. Special attention is paid to the cases when the leading and trailing coefficients are prescribed to be positive definite. A precise knowledge of the admissible sign characteristics of such matrix polynomials is revealed to be important.

Randomized Matrix Algorithms

A Randomized Asynchronous Linear Solver with Provable Convergence Rate

Haim Avron, Alex Druinsky, Anshul Gupta

Asynchronous methods for solving systems of linear equations have been researched since Chazan and Miranker published their pioneering paper on chaotic relaxation in 1969. The underlying idea of asynchronous methods is to avoid processor idle time by allowing the processors to continue to work and make progress even if not all progress made by other processors has been communicated to them.

Historically, work on asynchronous methods for solving linear equations focused on proving convergence in the limit. How the rate of convergence compares to that of the synchronous counterparts, and how it scales when the number of processors increases, was seldom studied and is still not well understood. Furthermore, the applicability of these methods was limited to restricted classes of matrices (e.g., diagonally dominant matrices).

We propose a shared-memory asynchronous method for general symmetric positive definite matrices. We rigorously analyze the convergence rate and prove that it is linear and close to that of our method's synchronous counterpart as long as not too many processors are used (relative to the size and sparsity of the matrix). A key component is randomization, which allows the processors to make guaranteed progress without introducing synchronization. Our analysis shows a convergence rate that is linear in the condition number of the matrix, and depends on the number of processors and the degree to which the matrix is sparse.

Randomized low-rank approximations, in theory and practice

Alex Gittens

Recent research has sharpened our understanding of certain classes of low-rank approximations formed using randomization. In this talk, we consider two classes of such approximants: those of general matrices formed using subsampled fast unitary transformations, and Nyström extensions of SPSD matrices. We see that despite the fact that the available bounds are often optimal or near-optimal in the worst case, in practice these methods are much better behaved than the bounds predict. We consider possible explanations for this gap, and potential ways to incorporate assumptions on our datasets to further increase the predictive value of the theory.
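As a baseline for the approximants discussed here, the prototype randomized low-rank scheme (test matrix, range finder, small SVD) can be sketched as follows; this sketch uses a Gaussian test matrix for simplicity, whereas the first class in the talk replaces it with a subsampled fast unitary transform:

```python
import numpy as np

def randomized_lowrank(A, k, p=5, seed=0):
    """Rank-k approximation of A from a randomized range basis with
    oversampling p (Halko-Martinsson-Tropp style prototype)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + p))      # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)               # orthonormal range basis
    B = Q.T @ A                                  # small (k+p) x n matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :k], s[:k], Vt[:k]
```

When A has exact rank k, the sampled range captures the column space and the approximation is exact up to rounding; the interesting regime studied in the talk is the behavior on matrices with slowly decaying spectra.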

Accuracy of Leverage Score Estimation

Ilse Ipsen, John Holodnak

Leverage scores were introduced in 1978 by Hoaglin and Welsch for outlier detection in statistical regression analysis. Starting about ten years ago, Mahoney et al. pioneered the use of leverage scores for importance sampling in randomized algorithms for matrix computations. Subsequently they developed sketching-based algorithms for estimating leverage scores, in order to reduce the operation count of O(mn^2) for an m×n full-column-rank matrix. We present tighter error bounds for the leverage score approximations produced by these sketching-based algorithms.
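For concreteness, the exact leverage scores of a full-column-rank m×n matrix are the squared row norms of any orthonormal basis of its column space; the O(mn^2) baseline that the sketching-based estimators aim to beat is just:

```python
import numpy as np

def leverage_scores(A):
    """Exact leverage scores via a thin QR factorization: score i is the
    squared 2-norm of row i of Q. Costs O(m n^2) for an m x n matrix."""
    Q, _ = np.linalg.qr(A)
    return np.sum(Q ** 2, axis=1)
```

The scores lie in [0, 1] and sum to n, which is what makes them a natural importance-sampling distribution after dividing by n.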

Implementing Randomized Matrix Algorithms in Parallel and Distributed Environments

Michael Mahoney

Motivated by problems in large-scale data analysis, randomized algorithms for matrix problems such as regression and low-rank matrix approximation have been the focus of a great deal of attention in recent years. These algorithms exploit novel random sampling and random projection methods; and implementations of these algorithms have already proven superior to traditional state-of-the-art algorithms, as implemented in Lapack and high-quality scientific computing software, for moderately-large problems stored in RAM on a single machine. Here, we describe the extension of these methods to computing high-precision solutions in parallel and distributed environments that are more common in very large-scale data analysis applications.

In particular, we consider both the Least Squares Approximation problem and the Least Absolute Deviations problem, and we develop and implement randomized algorithms that take advantage of modern computer architectures in order to achieve improved communication profiles. Our iterative least-squares solver, LSRN, is competitive with state-of-the-art implementations on moderately-large problems; and, when coupled with the Chebyshev semi-iterative method, scales well for solving large problems on clusters that have high communication costs, such as on an Amazon Elastic Compute Cloud cluster. Our iterative least-absolute-deviations solver is based on fast ellipsoidal rounding, random sampling, and interior-point cutting-plane methods; and we demonstrate significant improvements over traditional algorithms on MapReduce. In addition, this algorithm can also be extended to solve more general convex problems on MapReduce.

Sign Pattern Matrices

Sign Vectors and Duality in Rational Realization of the Minimum Rank

Marina Arav, Frank Hall, Zhongshan Li, Hein van der Holst

The minimum rank of a sign pattern matrix A is the minimum of the ranks of the real matrices whose entries have signs equal to the corresponding entries of A. We prove that rational realization for any m×n sign pattern matrix A with minimum rank n−2 is always possible. This is done using a new approach involving sign vectors and duality. Furthermore, we show that for each integer n > 12, there exists a nonnegative integer m such that there exists an n×m sign pattern matrix with minimum rank n−3 for which rational realization is not possible.

Sign patterns that require eventual exponential nonnegativity

Craig Erickson

A real square matrix A is eventually exponentially nonnegative if there exists a positive real number t0 such that for all t > t0, e^{tA} is an entrywise nonnegative matrix, where e^{tA} = Σ_{k=0}^∞ (t^k A^k)/k!. A sign pattern A is a matrix having entries in {+, −, 0}, and its qualitative class is the set of all real matrices A for which sgn(A) = A. A sign pattern A requires eventual exponential nonnegativity if every matrix in the qualitative class of A is eventually exponentially nonnegative. In this talk, we discuss the structure of sign patterns that require eventual exponential nonnegativity.
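A small numerical experiment makes the definition concrete. The sketch below checks entrywise nonnegativity of e^{tA} on a finite time grid; this is a heuristic check on sample matrices (not a proof), and the function names are our own:

```python
import numpy as np

def matrix_exp(M, terms=20):
    """Matrix exponential via a Taylor series with scaling and squaring
    (adequate for small demonstration matrices)."""
    s = max(0, int(np.ceil(np.log2(max(np.linalg.norm(M, 1), 1.0)))))
    X = M / 2 ** s                     # scale so the series converges fast
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        E = E + term
    for _ in range(s):                 # undo the scaling by repeated squaring
        E = E @ E
    return E

def eventually_exp_nonnegative(A, ts=None, tol=1e-12):
    """Check that e^{tA} is entrywise nonnegative from some grid point onward."""
    ts = np.linspace(0.1, 50.0, 100) if ts is None else ts
    flags = [bool(np.all(matrix_exp(t * A) >= -tol)) for t in ts]
    return any(all(flags[i:]) for i in range(len(flags)))
```

A nonnegative matrix passes for every t, while a matrix whose exponential always has a negative entry (e.g. with constant negative off-diagonal exponential entries) fails on the whole grid.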

Sign patterns with minimum rank 3

Wei Gao, Yubin Gao, Fei Gong, Guangming Jing, Zhongshan Li

A sign pattern (matrix) is a matrix whose entries are from the set {+, −, 0}. The minimum rank (respectively, rational minimum rank) of a sign pattern matrix A is the minimum of the ranks of the real (respectively, rational) matrices whose entries have signs equal to the corresponding entries of A. It is known that for each sign pattern A = [aij] with minimum rank 2, there is a rank 2 integer matrix [bij] with sgn(bij) = aij. A sign pattern A is said to be condensed if A has no zero row or column and no two rows or columns are identical or negatives of each other. In this talk, we establish a new direct connection between condensed m×n sign patterns with minimum rank 3 and m-point, n-line configurations in the plane, and we use this connection to construct the smallest known sign pattern whose minimum rank is 3 but whose rational minimum rank is greater than 3. We also obtain other results in this direction.

Sign patterns that require or allow generalizations of nonnegativity

Leslie Hogben

A real square matrix A is eventually positive (nonnegative) if A^k is an entrywise positive (nonnegative) matrix for all sufficiently large k. A matrix A is power-positive if some positive integer power of A is entrywise positive. A matrix A is eventually exponentially positive if there exists some t0 > 0 such that for all t > t0, e^{tA} = Σ_{k=0}^∞ (t^k A^k)/k! > 0.

A sign pattern A requires a property P if every real matrix A having sign pattern A has property P, and allows a property P if there is a real matrix A having sign pattern A that has property P. This talk will survey recent results about sign patterns that require or allow these various eventual properties that generalize nonnegativity.

Sign Patterns with minimum rank 2 and upper bounds on minimum ranks

Zhongshan Li, Wei Gao, Yubin Gao, Fei Gong, Marina Arav

A sign pattern (matrix) is a matrix whose entries are from the set {+, −, 0}. The minimum rank (resp., rational minimum rank) of a sign pattern matrix A is the minimum of the ranks of the real (resp., rational) matrices whose entries have signs equal to the corresponding entries of A. The notion of a condensed sign pattern is introduced. A new, insightful proof of the rational realizability of the minimum rank of a sign pattern with minimum rank 2 is obtained. Several characterizations of sign patterns with minimum rank 2 are established, along with linear upper bounds for the absolute values of an integer matrix achieving the minimum rank 2. A known upper bound for the minimum rank of a (+,−) sign pattern in terms of the maximum number of sign changes in the rows of the sign pattern is substantially extended to obtain upper bounds for the rational minimum ranks of general sign pattern matrices. The new concept of the number of polynomial sign changes of a sign vector is crucial for this extension. Another known upper bound for the minimum rank of a (+,−) sign pattern in terms of the smallest number of sign changes in the rows of the sign pattern is also extended to all sign patterns using the notion of the number of strict sign changes. Some examples and open problems are also presented.

Properties of Skew-Symmetric Adjacency Matrices

Judi McDonald

Given a graph on n vertices, a skew-symmetric adjacency matrix can be formed by choosing aij = ±1 whenever there is an edge from vertex i to vertex j, in such a way that aij aji = −1, and setting aij = 0 otherwise. Thus there is a collection of skew-symmetric matrices that can be associated with each graph. In this talk, characteristics of this collection will be discussed.

Using the Nilpotent-Jacobian Method Over Various Fields and Counterexamples to the Superpattern Conjecture.

Timothy Melvin

The Nilpotent-Jacobian method is a powerful tool used to determine whether a pattern (and its superpatterns) is spectrally arbitrary over R. We will explore what information can be gleaned from this method when we consider patterns over various fields, including finite fields, Q, and algebraic extensions of Q. We will also provide a number of counterexamples to the Superpattern Conjecture over the finite field F3.

Structure and Randomization in Matrix Computations

Accelerating linear system solutions using randomization

Marc Baboulin, Jack Dongarra

Recent years have seen an increase in peak “local” speed through parallelism, in terms of multicore processors and GPU accelerators. At the same time, the cost of communication between memory hierarchies and/or between processors has become a major bottleneck for most linear algebra algorithms. In this presentation we describe a randomized algorithm that accelerates the factorization of general or symmetric indefinite systems on multicore or hybrid multicore+GPU systems. Randomization prevents the communication overhead due to pivoting, is computationally inexpensive and requires very little storage. The resulting solvers outperform existing routines while providing a satisfying accuracy.
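The flavor of this randomization can be conveyed with a depth-1 random butterfly transformation: a two-sided orthogonal transform by random butterfly matrices makes the transformed system solvable without pivoting with high probability. This is our simplified sketch of the idea (the production solvers use recursive butterflies of small depth and a no-pivoting LU or LDL^T on the transformed matrix):

```python
import numpy as np

def random_butterfly(n, rng):
    """Depth-1 random butterfly: B = (1/sqrt(2)) [[D0, D1], [D0, -D1]] with
    random +-1 diagonal blocks D0, D1 (n must be even); B is orthogonal."""
    h = n // 2
    d0 = rng.choice([-1.0, 1.0], size=h)
    d1 = rng.choice([-1.0, 1.0], size=h)
    B = np.zeros((n, n))
    B[:h, :h] = np.diag(d0)
    B[:h, h:] = np.diag(d1)
    B[h:, :h] = np.diag(d0)
    B[h:, h:] = -np.diag(d1)
    return B / np.sqrt(2.0)

rng = np.random.default_rng(0)
n = 8
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
U, V = random_butterfly(n, rng), random_butterfly(n, rng)
Ar = U.T @ A @ V                       # randomized system, factorized w/o pivoting
x = V @ np.linalg.solve(Ar, U.T @ b)   # recovers the solution of A x = b
```

Because the butterflies are orthogonal, the transform costs only O(n^2), needs O(n) storage for the diagonals, and leaves the solution of the original system exactly recoverable.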

The bisection method and condition numbers for quasiseparable of order one matrices

Yuli Eidelman, Iulian Haimovici

We discuss the fast bisection method to compute all or selected eigenvalues of quasiseparable of order one matrices. The method is based essentially on fast evaluation formulas for characteristic polynomials of principal leading submatrices of quasiseparable of order one matrices obtained in [1]. In the implementation of the method, the problem of fast computation of condition numbers for quasiseparable of order one matrices arises in a natural way. The results of numerical tests are discussed.

[1] Y. Eidelman, I. Gohberg and V. Olshevsky, Eigenstructure of order-one-quasiseparable matrices. Three-term and two-term recurrence relations, LAA, 405 (2005), 1–40.

The unitary eigenvalue problem

Luca Gemignani

The talk is concerned with the computation of a condensed representation of a unitary matrix. A block Lanczos procedure for the reduction of a unitary matrix into a CMV-like form is investigated. The reduction is used as a pre-processing phase for eigenvalue computation by means of the QR method. Numerical results and some potential applications to polynomial root-finding will be discussed. This is a joint work with R. Bevilacqua and G. Del Corso.

Randomly sampling from orthonormal matrices: Coherence and Leverage Scores

Ilse Ipsen, Thomas Wentworth

We consider three strategies for sampling rows from matrices Q with orthonormal columns: Exactly(c) without replacement, Exactly(c) with replacement, and Bernoulli sampling.

We present different probabilistic bounds for the condition numbers of the sampled matrices, and express them in terms of the coherence or the leverage scores of Q. We also present lower bounds for the number of sampled rows to attain a desired condition number. Numerical experiments confirm the accuracy of the bounds, even for small matrix dimensions.
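To fix ideas, here is a small Python sketch (ours, not from the talk; the function names and the tiny example matrix are our assumptions) of the three sampling strategies and of the leverage scores, which are the squared row norms of Q; the coherence is their maximum:

```python
import random

def leverage_scores(Q):
    """Squared Euclidean row norms of Q; Q is assumed to have orthonormal columns."""
    return [sum(x * x for x in row) for row in Q]

def exactly_c_without_replacement(Q, c, rng):
    """Sample c distinct row indices uniformly at random."""
    return sorted(rng.sample(range(len(Q)), c))

def exactly_c_with_replacement(Q, c, rng):
    """Sample c row indices uniformly at random, repeats allowed."""
    return sorted(rng.randrange(len(Q)) for _ in range(c))

def bernoulli_sampling(Q, gamma, rng):
    """Keep each row independently with probability gamma."""
    return [i for i in range(len(Q)) if rng.random() < gamma]

# A 4 x 2 matrix with orthonormal columns; its leverage scores are all 0.5,
# so the coherence (their maximum) is 0.5 and they sum to the number of columns.
Q = [[0.5, 0.5], [0.5, -0.5], [0.5, 0.5], [0.5, -0.5]]
scores = leverage_scores(Q)
coherence = max(scores)
rows = exactly_c_without_replacement(Q, 2, random.Random(0))
```

The leverage scores of an orthonormal-column matrix always sum to the number of columns, which is what makes them usable as a sampling distribution.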

Inverse eigenvalue problems linked to rational Arnoldi, and rational (non)symmetric Lanczos

Thomas Mach, Marc Van Barel, Raf Vandebril

In this talk two inverse eigenvalue problems are discussed. First, given the eigenvalues and a weight vector, an (extended) Hessenberg matrix is computed. This matrix represents the recurrences linked to a (rational) Arnoldi inverse problem. It is well known that the matrix capturing the recurrence coefficients is of Hessenberg form in the standard Arnoldi case. Considering, however, rational functions, admitting finite as well as infinite poles, we will prove that the recurrence matrix is still of a highly structured form – the extended Hessenberg form. An efficient memory representation is presented for the Hermitian case and used to reconstruct the matrix capturing the recurrence coefficients.

In the second inverse problem, the above setting is generalized to the biorthogonal case. Instead of unitary similarity transformations, we drop the unitarity. Given the eigenvalues and the two first columns of the matrices determining the similarity, we want to retrieve the matrix of recurrences, as well as the matrices governing the transformation.

Decoupling randomness and vector space structure leads to high-quality randomized matrix algorithms

Michael Mahoney

Recent years have witnessed the increasingly sophisticated use of randomness as a computational resource in the development of improved algorithms for fundamental linear algebra problems such as least-squares approximation, low-rank matrix approximation, etc. The best algorithms (in worst-case theory, in high-quality numerical implementation in RAM, in large-scale parallel and distributed environments, etc.) are those that decouple, either explicitly in the algorithm or implicitly within the analysis, the Euclidean vector space structure (e.g., the leverage score structure, as well as other more subtle variants) from the algorithmic application of the randomness. For example, doing so leads to much better a priori bounds on solution quality and running time, more flexible ways to parameterize algorithms to make them appropriate for particular applications, as well as structural conditions that can be checked a posteriori. Although this perspective is quite different from more traditional approaches to linear algebra, the careful use of random sampling and random projection has already led to the development of qualitatively different and improved algorithms, both for matrices arising in more traditional scientific computing applications, as well as for matrices arising in large-scale machine learning and data analytics applications. This approach will be described, and several examples of it will be given.

Transformations of Matrix Structures Work Again

Victor Pan

In 1989 we proposed to employ Vandermonde and Hankel multipliers to transform into each other the matrix structures of Toeplitz, Hankel, Vandermonde and Cauchy types, as a means of extending any successful algorithm for the inversion of matrices having one of these structures to inverting the matrices with the structures of the three other types. The surprising power of this approach has been demonstrated in a number of works, which culminated in ingenious numerically stable algorithms that approximated the solution of a nonsingular Toeplitz linear system in nearly linear (versus previously cubic) arithmetic time. We first revisit this powerful method, covering it comprehensively, and then specialize it to yield a similar acceleration of the known algorithms for computations with matrices having structures of Vandermonde or Cauchy types. In particular we arrive at numerically stable approximate multipoint polynomial evaluation and interpolation in nearly linear time, by using O(bn log^h n) flops, where h = 1 for evaluation, h = 2 for interpolation, and 2^−b is the relative norm of the approximation errors.

On the Power of Multiplication by Random Matrices

Victor Pan

A random matrix is likely to be well conditioned, and motivated by this well known property we employ random matrix multipliers to advance some fundamental matrix computations. This includes numerical stabilization of Gaussian elimination with no pivoting as well as block Gaussian elimination, approximation of the leading and trailing singular spaces of an ill conditioned matrix, associated with its largest and smallest singular values, respectively, and approximation of this matrix by low-rank matrices, with extensions to the computation and certification of the numerical rank of a matrix and its 2-by-2 block triangular factorization, where its rank exceeds its numerical rank. We prove the efficiency of the proposed techniques where we employ Gaussian random multipliers, but our extensive tests have consistently produced the same outcome where instead we used sparse and structured random multipliers, defined by much fewer random parameters than the number of the input entries.

Multilevel low-rank approximation preconditioners

Yousef Saad, Ruipeng Li

A new class of methods based on low-rank approximations which has some appealing features will be introduced. The methods handle indefiniteness quite well and are more amenable to SIMD computations, which makes them attractive for GPUs. The method is easily defined for Symmetric Positive Definite model problems arising from Finite Difference discretizations of PDEs. We will show how to extend to general situations using domain decomposition concepts.

Structured matrix polynomials and their sign characteristic

Maha Al-Ammari, Steve Mackey, Yuji Nakatsukasa, Francoise Tisseur

Matrix polynomials P(λ) = λ^m A_m + · · · + λ A_1 + A_0 with Hermitian or symmetric matrix coefficients A_i, or with coefficients which alternate between Hermitian and skew-Hermitian matrices, or even with coefficient matrices appearing in a palindromic way, commonly arise in applications.

Standard and Jordan triples (X, J, Y) play a central role in the theory of matrix polynomials with nonsingular leading coefficient. They extend to matrix polynomials the notion of Jordan pairs (X, J) for a single matrix A, where A = X^−1 J X. Indeed, each matrix coefficient A_i of P(λ) can be expressed in terms of X, J and Y. We show that standard triples of structured P(λ) have extra properties. In particular there is a nonsingular matrix M such that M Y = v_S(J) X^*, where v_S(J) depends on the structure S of P(λ), and J is M-selfadjoint when S ∈ {Hermitian, skew-Hermitian}, M-skew-adjoint when S ∈ {∗-even, ∗-odd}, and M-unitary when S ∈ {palindromic, antipalindromic}.

The property of J implies that its eigenvalues, and therefore those of P(λ), occur in pairs (λ, f(λ)) when λ ≠ f(λ), where f(λ) = λ̄ for M-self-adjoint J, f(λ) = −λ̄ for M-skew-adjoint J, and f(λ) = 1/λ̄ for M-unitary J.

The eigenvalues for which λ = f(λ), that is, those that are not paired, have a sign +1 or −1 attached to them, forming the sign characteristic of P(λ). We define the sign characteristic of P(λ) as that of the pair (J, M), show how to compute it and study its properties. We discuss applications of the sign characteristic, in particular in control systems, in the solution of structured inverse polynomial eigenvalue problems, and in the characterization of special structured matrix polynomials such as overdamped quadratics, hyperbolic and quasidefinite matrix polynomials.

On solving indefinite least squares-type problems via anti-triangular factorization

Nicola Mastronardi, Paul Van Dooren

The indefinite least squares problem and the equality constrained indefinite least squares problem are modifications of the least squares problem and the equality constrained least squares problem, respectively, involving the minimization of a certain type of indefinite quadratic form. Such problems arise in the solution of Total Least Squares problems, in parameter estimation and in H∞ smoothing. Algorithms for computing the numerical solution of indefinite least squares and indefinite least squares with equality constraint are described by Bojanczyk et al. and Chandrasekharan et al.

The indefinite least squares problem and the equality constrained indefinite least squares problem can be expressed in an equivalent fashion as augmented square linear systems. Exploiting the particular structures of the coefficient matrices of such systems, new algorithms for computing the solution of such problems are proposed, relying on the anti-triangular factorization of the coefficient matrix (by Mastronardi et al.). Some results on their stability are shown together with some numerical examples.

Randomized and matrix-free structured sparse direct solvers

Jianlin Xia

We show how randomization and rank structures can be used in the direct solution of large sparse linear systems with nearly O(n) complexity and storage. Randomized sampling and related adaptive strategies help significantly improve both the efficiency and flexibility of structured solution techniques. We also demonstrate how these can be extended to the development of matrix-free direct solvers based on matrix-vector products only. This is especially interesting for problems with few varying parameters (e.g., frequency or shift). Certain analysis of the errors and stability will be given. Among many of the applications are the solutions of large discretized PDEs and some eigenvalue problems.

We also consider the issues of using the techniques in these contexts:

1. Finding selected as well as arbitrary entries of the inverse of large sparse matrices, and the cost is O(n) for O(n) entries when the matrices arise from the discretization of certain PDEs in both 2D and 3D.

2. The effectiveness of using these methods as incomplete structured preconditioners or even matrix-free preconditioners.

Part of the work is joint with Yuanzhe Xi.

Structured Matrix Functions and their Applications (Dedicated to Leonia Lerer on the occasion of his 70th birthday)

Structured singular-value analysis and structured Stein inequalities

Joseph Ball

For A an n × n matrix, it is well known that A is stable (i.e., I − zA is invertible for all z in the closed unit disk D) if and only if there exists a positive definite solution X ≻ 0 to the strict Stein inequality X − A^*XA ≻ 0. Let us now specify a block decomposition A = [A11 A12; A21 A22] and say that A is structured stable if

diag(I_{n1}, I_{n2}) − diag(z1 I_{n1}, z2 I_{n2}) [A11 A12; A21 A22]

is invertible for all (z1, z2) in the closed bidisk D × D. It is known that structured stability of A is implied by, but in general does not imply, the existence of a positive definite structured solution X = diag(X1, X2) to the Stein inequality X − A^*XA ≻ 0. However, if one enhances the structure by replacing A by A ⊗ I_{ℓ^2}, one does get an equivalence of robust stability with a structured LMI, i.e., the existence of a structured positive definite solution X to the Stein inequality X − A^*XA ≻ 0 is equivalent to the enhanced notion of robust stability:

I_{ℓ^2_n} − diag(Δ1, Δ2) (A ⊗ I_{ℓ^2_n}) invertible for all (Δ1, Δ2)

in the noncommutative closed bidisk B_{L(ℓ^2_{n1})} × B_{L(ℓ^2_{n2})}. These ideas are important in stability and performance analysis and synthesis for multidimensional systems as well as for standard input/state/output linear systems carrying a linear-fractional-transformation model of structured uncertainty (structured singular value or µ-analysis – see [1]). We present here a unified framework for formulating and verifying the result which incorporates non-square block structure as well as arbitrary block-multiplicity. This talk discusses joint work with Gilbert Groenewald and Sanne ter Horst of North-West University, Potchefstroom, South Africa.

Zero sums of idempotents and Banach algebras failing to be spectrally regular

Harm Bart

A basic result from complex function theory states that the contour integral of the logarithmic derivative of a scalar analytic function can only vanish when the function has no zeros inside the contour. The question dealt with here is: does this result generalize to the vector-valued case? We suppose the function takes values in a Banach algebra. In case of a positive answer, the Banach algebra is called spectrally regular. There are many such algebras. In this presentation, the focus will be on negative answers, so on Banach algebras failing to be spectrally regular. These have been identified via the construction of non-trivial zero sums of a finite number of idempotents. It is intriguing that we need only five idempotents in all known examples. The idempotent constructions relate to deep problems concerning the geometry of Banach spaces and general topology.

The talk is a report on work done jointly with Torsten Ehrhardt (Santa Cruz) and Bernd Silbermann (Chemnitz).

An overview of Leonia Lerer’s work

Marinus Kaashoek

In this talk I will present an overview of some of Leonia Lerer's many contributions to matrix and operator theory. The emphasis will be on Szego-Krein orthogonal matrix polynomials and Krein orthogonal entire matrix functions.

Trace formulas for truncated block Toeplitz operators

David Kimsey

The strong Szego limit theorem may be formulated in terms of operators of the form

(P_N G P_N)^n − P_N G^n P_N, for n = 1, 2, . . . ,

where G denotes the operator of multiplication by a suitably restricted d × d mvf (matrix-valued function) acting on the space of d × 1 vvf's (vector-valued functions) f that meet the constraint ∫_0^{2π} f(e^{iθ})^* Δ(e^{iθ}) f(e^{iθ}) dθ < ∞ with Δ(e^{iθ}) = I_d, and P_N denotes the orthogonal projection onto the space of vvf's f with Fourier series f(e^{iθ}) = Σ_{k=−N}^{N} e^{ikθ} f_k that are subject to the same summability constraint. We shall study these operators for a class of positive definite mvf's Δ which admit the factorization Δ(e^{iθ}) = Q(e^{iθ})^* Q(e^{iθ}) = R(e^{iθ}) R(e^{iθ})^*, where Q^{±1} and R^{±1} belong to a certain algebra. We show that the limit

κ_n(G) = lim_{N↑∞} trace{(P_N G P_N)^n − P_N G^n P_N}

exists and may be evaluated using Δ(e^{iθ}) = I_d, when GQ = QG and R^*G = G R^*. Moreover, when G possesses a special factorization,

κ_n(G) = −nd Σ_{k=1}^{n−1} Σ_{j=1}^{∞} j (trace([G^k]_j) / k) (trace([G^{n−k}]_{−j}) / (n − k)).

We will also discuss analogous results for operators which are connected to a continuous analog of the strong Szego limit theorem.

This talk is based on joint work with Harry Dym.

Norm asymptotics for a special class of Toeplitz generated matrices with perturbations

Hermann Rabe

Matrices of the form X_n = T_n + c_n (T_n^{−1})^*, where T_n is an invertible banded Toeplitz matrix and c_n a sequence of positive real numbers, are considered. The norms of their inverses are described asymptotically as their n × n sizes increase. Certain finite rank perturbations of these matrices are shown to have no effect on their asymptotic behaviour.

One sided invertibility, corona problems, and Toeplitz operators

Leiba Rodman

We study Fredholm properties of Toeplitz operators with matrix symbols acting on the Hardy space of the upper halfplane, in relation to the Fredholm properties of Toeplitz operators with scalar determinantal symbols. The study is based on analysis of one-sided invertibility in the abstract framework of a unital commutative ring, and of the Riemann–Hilbert problem in connection to pairs of matrix functions that satisfy the corona condition. The talk is based on joint work with M.C. Camara and I.M. Spitkovsky.

Discrete Dirac systems and structured matrices

Alexander Sakhnovich

We show that the discrete Dirac system is equivalent to the Szego recurrence. The discrete Dirac system is also of independent interest. We present Weyl theory for discrete Dirac systems with rectangular matrix “potentials”. Block structured matrices are essential for the solution of the corresponding inverse problems. The results were obtained jointly with B. Fritzsche, B. Kirstein and I. Roitberg.

On almost normal matrices

Tyler Moran, Ilya Spitkovsky

Let an n-by-n matrix A be almost normal in the sense that it has n − 1 orthogonal eigenvectors. The properties of its numerical range W(A) and Aluthge transform Δ(A) are explored. In particular, it is proven that for unitarily irreducible almost normal A, W(A) cannot have flat portions on the boundary and Δ(A) is not normal (the latter under the additional conditions that n > 2 and A is invertible). In passing, the unitary irreducibility criterion for A, the almost normality criterion for A^*, and the rank formula for the self-commutator A^*A − AA^* are established.

Toeplitz plus Hankel (and Alternating Hankel)

Gilbert Strang

A symmetric tridiagonal Toeplitz matrix T has the strong TH property:

All functions of T can be written as a Toeplitz matrix plus a Hankel matrix.

We add two families of matrices to this class:

1. The entries in the four corners of T can be arbitrary (with symmetry)

2. T can be completed to have a tridiagonal inverse (Kac-Murdock-Szego matrices have entries cR^{|i−j|} with rank 1 above the diagonal)

The antisymmetric centered difference matrix D has a parallel property except that its antidiagonals alternate in sign (alternating Hankel). All functions of D are Toeplitz plus Alternating Hankel. We explore this subspace of matrices. The slowly varying case (not quite Toeplitz) is also of interest.

Determinantal representations of stable polynomials

Hugo Woerdeman

For every stable multivariable polynomial p, with p(0) = 1, we construct a determinantal representation

p(z) = det(I_n − M(z)),

where M(z) is a matrix-valued rational function with ‖M(z)‖ ≤ 1 and ‖M(z)^n‖ < 1 for z ∈ T^d, and M(az) = aM(z) for all a ∈ C \ {0}.

Symbolic Matrix Algorithms

Reducing memory consumption in Strassen-like matrix multiplication

Brice Boyer, Jean-Guillaume Dumas

The Strassen–Winograd matrix multiplication algorithm is efficient in practice but needs temporary matrix allocations. We present techniques and tools used to reduce that need for extra memory in Strassen-like matrix-matrix multiplication, but also in the product with accumulation variant. The effort is put into keeping a time complexity close to Strassen–Winograd's (i.e. 6n^{log2(7)} − cn^2 + o(n^2)) while using less extra space than what was proposed in the literature. We finally show experimental results and comparisons.
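For background, the 7-recursive-product recursion can be written compactly in plain Python. This is our own didactic sketch of the classical Strassen scheme, not the memory-economical Strassen–Winograd variant discussed in the talk, and it freely allocates temporaries, which is exactly the cost the talk aims to reduce.

```python
def madd(X, Y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def msub(X, Y):
    return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(X, Y)]

def quadrants(X):
    """Split a 2h x 2h matrix into its four h x h blocks."""
    h = len(X) // 2
    return ([r[:h] for r in X[:h]], [r[h:] for r in X[:h]],
            [r[:h] for r in X[h:]], [r[h:] for r in X[h:]])

def strassen(A, B):
    """Multiply two n x n matrices, n a power of two, with 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    a11, a12, a21, a22 = quadrants(A)
    b11, b12, b21, b22 = quadrants(B)
    m1 = strassen(madd(a11, a22), madd(b11, b22))
    m2 = strassen(madd(a21, a22), b11)
    m3 = strassen(a11, msub(b12, b22))
    m4 = strassen(a22, msub(b21, b11))
    m5 = strassen(madd(a11, a12), b22)
    m6 = strassen(msub(a21, a11), madd(b11, b12))
    m7 = strassen(msub(a12, a22), madd(b21, b22))
    c11 = madd(msub(madd(m1, m4), m5), m7)
    c12 = madd(m3, m5)
    c21 = madd(m2, m4)
    c22 = madd(msub(madd(m1, m3), m2), m6)
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bot = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bot

def naive(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

A = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
B = [[1, 0, 2, 0], [0, 1, 0, 2], [3, 0, 1, 0], [0, 3, 0, 1]]
C = strassen(A, B)
```

Each of the seven products m1..m7 requires temporary block sums and differences; reducing how many of those temporaries must live at once is the subject of the talk.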

Early Termination over Small Fields: A Consideration of the Block Case

Wayne Eberly

The incorporation of an “early termination” heuristic as part of a Wiedemann-style algorithm for a matrix computation can significantly reduce the cost to apply the algorithm on some inputs. The matrix properties that suffice to prove the reliability of such a heuristic – which also serve to bound the cost to use a Lanczos-style algorithm instead – hold generically. This makes it challenging to describe an input for which such a heuristic will not be reliable. Indeed, I have believed for several years that no such input exists. That is, I believe that these heuristics are reliable for all input matrices, over all fields, and using all blocking factors.

While it would still be necessary to condition inputs in order to achieve other matrix properties, related to the particular computation that is being performed, a proof of such a result would eliminate the need to condition (or choose blocking factors) in order to ensure the reliability of this kind of heuristic.

In this talk, I will discuss ongoing work to obtain such a proof.

Outlier detection by error correcting decoding and structured linear algebra methods

Erich Kaltofen

Error-correcting decoding is generalized to multivariate sparse polynomial and rational function interpolation from evaluations that can be numerically inaccurate and where several evaluations can have severe errors (“outliers”).

Our univariate solution for numerical Prony-Blahut / Giesbrecht-Labahn-Lee sparse interpolation can identify and remove outliers by a voting scheme.

Our multivariate polynomial and rational function interpolation algorithm combines Zippel's symbolic sparse polynomial interpolation technique [Ph.D. Thesis MIT 1979] with the numeric algorithm by Kaltofen, Yang, and Zhi [Proc. SNC 2007], and removes outliers (“cleans up data”) by techniques from the Berlekamp/Welch decoder for Reed-Solomon codes.

Our algorithms can build a sparse function model from a number of evaluations that is linear in the sparsity of the model, assuming a constant number of outliers.

This is joint work with Matthew Comer (NCSU), Clement Pernet (Univ. Grenoble), and Zhengfeng Yang (ECNU, Shanghai).

Applications of fast nullspace computation for polynomial matrices

George Labahn, Wei Zhou

In this talk we give a number of fast, deterministic algorithms for polynomial matrix problems. The algorithms include algorithms for computing matrix inverses, column bases, determinants and matrix normal forms such as Hermite and Popov. These algorithms all depend on fast algorithms for order bases (Zhou and Labahn in ISSAC 2009) and nullspace bases (Zhou, Labahn, Storjohann in ISSAC 2012). The computational costs are all similar to the cost of multiplying matrices of the same size as the input and having degrees equal to the average column degree of the input matrix.

Polynomial Evaluation and Interpolation: Fast and Stable Approximate Solution

Victor Pan

Multipoint polynomial evaluation and interpolation are fundamental for modern algebraic and numerical computing. The known algorithms solve both problems over any field by using O(N log^2 N) arithmetic operations for the input of size N, but the cost grows to quadratic for numerical solution. Our study results in numerically stable algorithms that use O(uN log N) arithmetic time for approximate evaluation (within the relative output error norm 2^−u) and O(uN log^2 N) time for approximate interpolation. The problems are equivalent to multiplication of an n × n Vandermonde matrix by a vector and the solution of a nonsingular Vandermonde linear system of n equations, respectively. The algorithms and complexity estimates can be applied in both cases as well as where the transposed Vandermonde matrices replace Vandermonde matrices. Our advance is due to employing and extending our earlier method of the transformation of matrix structures, which enables application of the HSS–Multipole method to our tasks and further extension of the algorithms to more general classes of structured matrices.

Wanted: reconditioned preconditioners

David Saunders

Probabilistic preconditioners play a central role in blackbox methods for exact linear algebra. They typically introduce runtime and space cost factors that are logarithmic in the matrix dimension. This applies for example to blackbox algorithms for rank, solving systems, sampling nullspace, minimal polynomial.

However, the worst case scenarios necessitating the preconditioner overhead are rarely encountered in practice. This talk is a call for stronger and more varied analyses of preconditioners so that the practitioner has adequate information to engineer an effective implementation. I will offer a mixture of results, partial results, experimental evidence, and open questions illustrating several approaches to refining preconditioners and their analyses. I will discuss the block Wiedemann algorithm, diagonal preconditioners, and, if time permits, linear space preconditioners for rectangular matrices.

Computing the invariant structure of integer matrices: fast algorithms into practice

Arne Storjohann, Colton Pauderis

A lot of information about the invariant structure of a nonsingular integer matrix can be obtained by looking at random projections of the column space of the rational inverse. In this talk I describe an algorithm that uses such projections to compute the determinant of a nonsingular n × n matrix. The algorithm can be extended to compute the Hermite form of the input matrix. Both the determinant and Hermite form algorithms certify correctness of the computed results. Extensive empirical results from a highly optimized implementation show the running time grows approximately as n^3 log n, even for input matrices with a highly nontrivial Smith invariant structure.

Euclidean lattice basis reduction: algorithms and experiments for disclosing integer relations

Gilles Villard

We review polynomial time approaches for computing simultaneous integer relations among real numbers. A variant of the LLL lattice reduction algorithm (A. Lenstra, H. Lenstra, L. Lovasz, 1982), the HJLS (J. Hastad, B. Just, J. Lagarias, and C.P. Schnorr, 1989) and the PSLQ (H. Ferguson, D. Bailey, 1992) algorithms are de facto standards for solving the problem. We investigate the links between the various approaches and present intensive experiment results. We especially focus on the question of the precision for the underlying floating-point procedures used in the currently fastest known algorithms and software libraries. Part of this work is done in collaboration with D. Stehle in relation with the fplll library, and with D. Stehle and J. Chen for the HJLS/PSLQ comparison.

Contributed Session on Algebra and Matrices over Arbitrary Fields Part 1

Commutativity preserving maps via maximal centralizers

Gregor Dolinar, Alexander Guterman, Bojan Kuzma, Polona Oblak

Mappings on different spaces which preserve commutativity and have some additional properties were studied by many authors. Usually one of these additional properties was linearity. Recently Semrl characterized bijective maps on complex matrices that preserve commutativity in both directions without a linearity assumption.

We will present the characterization of bijective maps preserving commutativity in both directions on matrices over algebraically non-closed fields, thus extending Semrl's result. We obtained our result using recently published characterizations of matrices with maximal and with minimal centralizers over arbitrary fields with sufficiently many elements.

Paratransitive algebras of linear transformations and a spatial implementation of “Wedderburn's Principal Theorem”

Leo Livshits, Gordon MacDonald, Laurent Marcoux, Heydar Radjavi

We study a natural weakening – which we refer to as paratransitivity – of the well-known notion of transitivity of an algebra A of linear transformations acting on a finite-dimensional vector space V. Given positive integers k and m, we shall say that such an algebra A is (k,m)-transitive if for every pair of subspaces W1 and W2 of V of dimensions k and m respectively, we have AW1 ∩ W2 ≠ 0. We consider the structure of minimal (k,m)-transitive algebras and explore the connection of this notion to a measure of largeness for invariant subspaces of A.

To discern the structure of the paratransitive algebras we develop a spatial implementation of “Wedderburn's Principal Theorem”, which may be of interest in its own right: if a subalgebra A of L(V) is represented by block-upper-triangular matrices with respect to a maximal chain of its invariant subspaces, then after an application of a block-upper-triangular similarity, the block-diagonal matrices in the resulting algebra comprise its Wedderburn factor. In other words, we show that, up to a block-upper-triangular similarity, A is a linear direct sum of an algebra of block-diagonal matrices and an algebra of strictly block-upper-triangular matrices.

On extremal matrix centralizers

Gregor Dolinar, Alexander Guterman, Bojan Kuzma, Polona Oblak

Centralizers of matrices induce a natural preorder, where A ⪯ B if the centralizer of A is included in the centralizer of B. Minimal and maximal matrices in this preorder are of special importance.

In the talk, we first recall known equivalent characterizations of maximal matrices and also of minimal matrices over algebraically closed fields. Then we present analogue characterizations for maximal matrices over arbitrary fields and minimal matrices over fields with sufficiently large cardinality.

Adjacency preserving maps

Peter Semrl

Two m × n matrices A and B are adjacent if rank(B − A) = 1. Hua's fundamental theorem of geometry of rectangular matrices describes the general form of bijective maps on the set of all m × n matrices preserving adjacency in both directions. Some recent improvements of this result will be presented.

Contributed Session on Algebra and Matrices over Arbitrary Fields Part 2

On trivectors and hyperplanes of symplectic dual polar spaces

Bart De Bruyn, Mariusz Kwiatkowski

Let V be a vector space of even dimension 2n ≥ 4 over a field F and let f be a nondegenerate alternating bilinear form on V. The symplectic group Sp(V, f) ≅ Sp(2n, F) associated with the pair (V, f) has a natural action on the n-th exterior power ⋀^n V of V. Two elements χ1 and χ2 of ⋀^n V are called Sp(V, f)-equivalent if there exists a θ ∈ Sp(V, f) such that χ1^θ = χ2. The Sp(V, f)-equivalence classes which arise from this group action can subsequently be merged to obtain the so-called quasi-equivalence classes. With the pair (V, f) there is also associated a point-line geometry DW(2n − 1, F) (called a symplectic dual polar space), and these quasi-equivalence classes form the basis of studying the so-called hyperplanes of that geometry. By a hyperplane we mean a set of points of the geometry meeting each line in either a singleton or the whole line.

The talk will focus on the case n = 3. In the special case that n = 3 and F is an algebraically closed field of characteristic distinct from 2, a complete classification of the Sp(V, f)-equivalence classes was obtained by Popov. In the talk, I will present a complete classification which is valid for any field F. This (long) classification was realized in a number of papers, ultimately resulting in a description using 28 families (some of which are parametrized by elements of F). If there is still time, I will also discuss the classification of the quasi-Sp(V, f)-equivalence classes and some applications to hyperplanes of DW(2n − 1, F).

Uniform Kronecker Quotients

Yorick Hardy

Leopardi introduced the notion of a Kronecker quotient in 2005 [1]. We consider the basic properties that a Kronecker quotient should satisfy and completely characterize a class of Kronecker quotients, namely uniform Kronecker quotients.

Not every property of the Kronecker product has a satisfiable corresponding property for Kronecker quotients. Results regarding some of these properties for uniform Kronecker quotients will be presented. Finally, some examples of uniform Kronecker quotients will be described.

[1] P. Leopardi, “A generalized FFT for Clifford algebras”, Bulletin of the Belgian Mathematical Society, 11, 663–688, 2005.

Partial matrices of constant rank over small fields

Rachel Quinlan

In a partial matrix over a field F, entries are either specified elements of F or indeterminates. Indeterminates in different positions are independent, and a completion of the partial matrix may be obtained by assigning a value from F to each indeterminate. A number of recent research articles have investigated partial matrices whose completions all have the same rank or satisfy specified rank bounds.

We consider the following question: if all completions of a partial m×n matrix A over a field F have the same rank r, must A possess an r×r sub(partial)matrix whose completions are all non-singular? This question can be shown to have an affirmative answer if the field F has at least r elements. This talk will concentrate on the case where this condition is not satisfied. We will show that the question above can have a negative answer if |F| < r and max(m, n) > 2|F|.

Permanents, determinants, and generalized complementary basic matrices

Miroslav Fiedler, Frank Hall, Mikhail Stroev

In this talk we answer the questions posed in the article “A note on permanents and generalized complementary basic matrices”, Linear Algebra Appl. 436 (2012), by M. Fiedler and F. Hall. Further results on permanent compounds of generalized complementary basic matrices are obtained. Most of the results are also valid for the determinant and the usual compound matrix. Determinant and permanent compound products which are intrinsic are also considered, along with extensions to total unimodularity.

Contributed Session on Computational Science

Multigrid methods for high-dimensional problems with tensor structure

Matthias Bolten

Tensor structure is present in many applications, ranging from the discretization of possibly high-dimensional partial differential equations to structured Markov chains. Many approaches have been studied recently to solve the associated systems, including Krylov subspace methods.

In this talk, we will present multigrid methods that can be used for certain structured problems. This is of interest when not only the dimension d is large but also the basis n, where the number of degrees of freedom is given by N = n^d. If the problem has d-level circulant or Toeplitz structure, the theoretical results for multigrid methods for these classes of matrices hold, and further an efficient implementation for low-rank approximations is possible. Theoretical considerations as well as numerical results are presented.

Discrete Serrin’s Problem

Cristina Arauz, Angeles Carmona, Andres Encinas

In 1971, J. Serrin solved the following problem in potential theory: if Ω is a smooth bounded open connected domain in Euclidean space for which the solution u of the Dirichlet problem Δu = −1 in Ω and u = 0 on ∂Ω satisfies the overdetermined boundary condition ∂u/∂n = constant on ∂Ω, then Ω is a ball and u has radial symmetry. This result has multiple extensions and, what is more interesting, the methods used to solve the problem have applications to the study of the symmetry of solutions of very general elliptic problems.

We consider here the discrete analogue of this problem. Specifically, given a graph Γ = (V, E) and a connected subset F ⊂ V, the solution of the Dirichlet problem L(u) = 1 on F and u = 0 on δ(F) is known as the equilibrium measure. The discrete Serrin problem can therefore be formulated as: does the additional condition ∂u/∂n = constant on δ(F) imply that F is a ball and u is radial? We provide some examples showing that the answer is negative. However, the values of the equilibrium measure depend on the distance to the boundary, and hence we ask whether, imposing some symmetries on the domain, the solution to Serrin's problem is radial. The conclusion is true for many families of graphs with a high degree of symmetry, for instance for distance-regular graphs or spider networks.
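As a minimal numerical illustration of the discrete Dirichlet problem above (a toy example of ours, not one from the talk), the equilibrium measure of the interior of a path graph can be computed by solving the Laplacian system restricted to F; note how the values depend only on the distance to the boundary:

```python
import numpy as np

# Path P5 with vertices 0..4; F = {1, 2, 3} is the interior, δ(F) = {0, 4}.
# The equilibrium measure solves L(u) = 1 on F with u = 0 on δ(F),
# i.e. a linear system in the rows/columns of the Laplacian indexed by F.
n = 5
L = 2 * np.eye(n)
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1
L[0, 0] = L[-1, -1] = 1          # path Laplacian (degree 1 at the endpoints)

F = [1, 2, 3]
u = np.linalg.solve(L[np.ix_(F, F)], np.ones(len(F)))

# The values depend only on the distance to the boundary δ(F):
assert np.allclose(u, [1.5, 2.0, 1.5])
```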

A New Algebraic Approach to the Construction of Multidimensional Wavelet Filter Banks

Youngmi Hur, Fang Zheng

In this talk we present a new algebraic approach for constructing wavelet filter banks. Our approach enables constructing nonseparable multidimensional nonredundant wavelet filter banks with a prescribed number of vanishing moments. Our construction method works under a very general setting (it works for any spatial dimension and for any sampling matrix) and, at the same time, it does not require the initial lowpass filters to satisfy any additional assumption such as an interpolatory condition. In fact, our method is general enough to include, as special cases, some existing construction methods that work only for interpolatory lowpass filters, such as the multidimensional construction method developed using the traditional lifting scheme.

A new method for solving completely integrable PDEs

Andrey Melnikov

We will demonstrate a new method for solving (some) completely integrable PDEs. At its core, this method includes an analogue of “scattering theory”, which is implemented using the theory of vessels. Evolving such vessels, we create solutions of completely integrable PDEs. For example, the KdV and evolutionary NLS equations arise from the same evolutionary vessel for different vessel parameters. Using a more sophisticated construction, we solve the Boussinesq equation.

This theory is connected to the Zakharov-Shabat scheme, and effectively generalizes it. We will see examples of classical solutions of different equations implemented in the theory of vessels, as well as some new results.

Green matrices associated with Generalized Linear Polyominoes

Angeles Carmona, Andres Encinas, Margarida Mitjana

A polyomino is an edge-connected union of cells in the planar square lattice. Polyominoes are very popular in mathematical recreations, and have found interest among mathematicians, physicists, biologists, and computer scientists as well. Because the chemical constitution of a molecule is conventionally represented by a molecular graph or network, polyominoes have deserved the attention of the organic chemistry community, and several molecular structure descriptors based on network structural descriptors have been introduced. In particular, in the last decade a great number of works devoted to calculating the Kirchhoff index of linear polyomino-like networks have been published. In this work we deal with a class of polyominoes, which we call generalized linear polyominoes, that besides the most popular class of linear polyomino chains also includes cycles, phenylenes, and hexagonal chains, to name only a few. Because the Kirchhoff index is the trace of the Green function of the network, here we obtain the Green function of generalized linear polyominoes. To do this, we view a polyomino as a perturbation of a path obtained by adding weighted edges between opposite vertices. Therefore, unlike the techniques used by Yang and Zhang in 2008, which are based on the decomposition of the combinatorial Laplacian into structured blocks, here we obtain the Green function of a linear polyomino from a perturbation of the combinatorial Laplacian. This approach deeply links linear polyomino Green functions with the inverse M-matrix problem and especially with the so-called Green matrices of Gantmacher and Krein (1941).

A Model Reduction Algorithm for Simulating Sedimentation Velocity Analysis

Hashim Saber

An algorithm for the construction of a reduced model is developed to efficiently simulate a partial differential equation with distributed parameters. The algorithm is applied to the Lamm equation, which describes the sedimentation velocity experiment. It is a large-scale inverse model that is costly to evaluate repeatedly. Moreover, its high-dimensional parametric input space compounds the difficulty of effectively exploring the simulation process. The proposed parametric model reduction is applied to the simulation of the sedimentation velocity experiment. The model is treated as a system with sedimentation and diffusion parameters to be preserved during model reduction. Model reduction allows us to reduce the simulation time significantly while maintaining high accuracy.

Contributed Session on Graph Theory

A characterization of the spectrum of a graph and of its vertex-deleted subgraphs

Milica Andelic, Slobodan Simic, Carlos Fonseca

Let G be a graph and A its adjacency matrix. The relation between the multiplicity of λ as an eigenvalue of A and its multiplicity as an eigenvalue of the principal submatrix of A obtained by deleting the row and column indexed by a vertex of G is well known. Here, we provide characterizations of the three possible types of vertices of G.
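The relation in question is governed by Cauchy interlacing: deleting a vertex changes the multiplicity of a fixed eigenvalue by at most one. A small numerical check (our own example with the star K_{1,3}; nothing here beyond standard interlacing):

```python
import numpy as np

def mult(M, lam, tol=1e-9):
    """Multiplicity of lam as an eigenvalue of the symmetric matrix M."""
    return int(np.sum(np.abs(np.linalg.eigvalsh(M) - lam) < tol))

# Star K_{1,3} with center 0: eigenvalues are ±sqrt(3) and 0 (multiplicity 2).
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
m0 = mult(A, 0.0)
assert m0 == 2

for v in range(4):
    Av = np.delete(np.delete(A, v, axis=0), v, axis=1)
    assert abs(mult(Av, 0.0) - m0) <= 1   # interlacing: change is at most 1

# Deleting the center raises the multiplicity; deleting a leaf lowers it.
assert mult(np.delete(np.delete(A, 0, 0), 0, 1), 0.0) == m0 + 1
assert mult(np.delete(np.delete(A, 1, 0), 1, 1), 0.0) == m0 - 1
```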

Perturbations of Discrete Elliptic operators

Angeles Carmona, Andres Encinas, Margarida Mitjana

Given a finite set V, a self-adjoint operator K is called elliptic if it is positive semi-definite and its lowest eigenvalue is simple. Then there exists a unique, up to sign, unitary function ω ∈ C(V) satisfying K(ω) = λω, and K is named (λ, ω)-elliptic. Clearly, a (λ, ω)-elliptic operator is singular iff λ = 0. Examples of elliptic operators are the so-called Schrödinger operators on finite connected networks, as well as the signless Laplacian of connected bipartite graphs.

If K is a (λ, ω)-elliptic operator, it defines an automorphism on ω⊥, whose inverse is called the orthogonal Green operator of K. We aim here at studying the effect of a perturbation of K on its orthogonal Green operator. The perturbation considered here is performed by adding a self-adjoint and positive semi-definite operator to K. As particular cases we consider the effect of changing the conductances on semi-definite Schrödinger operators on finite connected networks and on the signless Laplacian of connected bipartite graphs. The expression obtained for the perturbed networks is explicitly given in terms of the orthogonal Green function of the original network.


Matrices and their Kirchhoff Graphs

Joseph Fehribach

Given a matrix with integer or rational elements, what graph or graphs represent this matrix in terms of the fundamental theorem of linear algebra? This talk will define the concept of a Kirchhoff or fundamental graph for the matrix and will explain how such a graph represents a matrix. A number of basic results pertaining to Kirchhoff graphs will be presented, and a process for constructing them will be discussed. Finally, the talk will conclude with an example from electrochemistry: the production of NaOH from NaCl. Kirchhoff graphs have their origin in electrochemical reaction networks and are closely related to reaction route graphs.

Locating eigenvalues of graphs

David Jacobs, Vilmar Trevisan

We present a linear-time algorithm that determines the number of eigenvalues, in any real interval, of the adjacency matrix A of a tree. The algorithm relies on Sylvester's Law of Inertia, and produces a diagonal matrix congruent to A + xI. Efficient approximation of a single eigenvalue can be achieved by divide-and-conquer.
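A sketch of such a congruence-based diagonalization for trees follows; this is our reconstruction of the leaves-to-root elimination (in particular, the rule applied when a child's value vanishes is how we understand the published algorithm, so treat the details as illustrative). By Sylvester's Law of Inertia, the negative entries of the output count the eigenvalues of A below the shift:

```python
def diagonalize_tree(parent, x):
    """Return a diagonal (as a list) congruent to A + x*I, where A is the
    adjacency matrix of the tree given by a parent array (parent[root] = -1)."""
    n = len(parent)
    children = [[] for _ in range(n)]
    root = 0
    for v, p in enumerate(parent):
        if p < 0:
            root = v
        else:
            children[p].append(v)
    # iterative postorder: reversed preorder puts children before parents
    order, stack = [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(children[v])
    order.reverse()
    d = [float(x)] * n
    cut = [False] * n          # vertices disconnected from their parent
    for v in order:
        zero_child = None
        for c in children[v]:
            if cut[c]:
                continue
            if d[c] == 0.0:
                zero_child = c
            else:
                d[v] -= 1.0 / d[c]
        if zero_child is not None:
            d[zero_child] = 2.0
            d[v] = -0.5
            cut[v] = True      # remove the edge from v to its parent
    return d

def count_below(parent, alpha):
    """Number of adjacency eigenvalues < alpha (Sylvester's Law of Inertia)."""
    return sum(1 for t in diagonalize_tree(parent, -alpha) if t < 0.0)

# Path P4: adjacency eigenvalues 2*cos(k*pi/5), approx ±1.618 and ±0.618.
p4 = [-1, 0, 1, 2]
assert count_below(p4, 0.0) == 2
assert count_below(p4, 1.0) - count_below(p4, -1.0) == 2   # two in (-1, 1)
```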

The algorithm can be adapted to other matrices, including Laplacian matrices of trees. We report on several applications of the algorithm, such as integrality of trees, distribution and bounds of Laplacian eigenvalues, obtaining trees with extremal values of Laplacian energy, and ordering trees by Laplacian energy.

Finally, we present a diagonalization procedure for adjacency matrices of threshold graphs, also leading to a linear-time algorithm for computing the number of eigenvalues in a real interval. Applications include a formula for the smallest eigenvalue among threshold graphs of size n, a simplicity result, and a characteristic polynomial construction. A similar diagonalization algorithm is obtained for the distance matrix of threshold graphs.

The Cycle Intersection Matrix and Applications to Planar Graphs

Caitlin Phifer, Woong Kook

Given a finite connected planar graph G with oriented edges, we define the cycle intersection matrix C(G) as follows: let c_ii be the length of the cycle which bounds finite face i, and let c_ij be the (signed) number of edges that cycles i and j have in common. We give an elementary proof that the determinant of this matrix is equal to the number of spanning trees of G. As an application, we compute the number of spanning trees of the grid graph in terms of Chebyshev polynomials. In addition, we show an interesting connection between the determinant of C(G) and the Fibonacci sequence when G is a triangulated n-gon. If time permits, we will also discuss the spanning tree entropy for grid graphs.
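The determinant identity can be checked on a small case against the Matrix-Tree theorem; the test graph (a square with one diagonal, giving two triangular faces) and the orientation conventions below are our own choices:

```python
import numpy as np

# A square with one diagonal: 4 vertices, 5 edges, two triangular faces.
# Orient both bounded faces counterclockwise; they share the diagonal,
# traversed in opposite directions, so c12 = c21 = -1.
C = np.array([[3.0, -1.0],
              [-1.0, 3.0]])      # c_ii = boundary length of face i

# Matrix-Tree theorem: spanning trees = any cofactor of the Laplacian.
# Vertices 0..3; square edges 0-1, 1-2, 2-3, 3-0 plus the diagonal 0-2.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
L = np.zeros((4, 4))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1
trees = round(np.linalg.det(L[1:, 1:]))

assert round(np.linalg.det(C)) == trees == 8
```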

Positive Semidefinite Propagation Time

Nathan Warnberg

Positive semidefinite (PSD) zero forcing on a simple undirected graph G is based on the following color change rule: let B ⊆ V(G) be colored black and the rest of the vertices be colored white, and let C_1, C_2, ..., C_k be the connected components of G − B. For any black vertex b that has exactly one white neighbor w in the subgraph induced by B ∪ C_i, change the color of w to black. A minimum PSD zero forcing set (PSDZFS) is a set of black vertices of minimum cardinality that colors the entire graph black. The PSD propagation time of a PSDZFS B of a graph G is the minimum number of iterations of the color change rule needed to force all vertices of G black, starting with the vertices in B black. The minimum and maximum PSD propagation times are taken over all minimum PSD zero forcing sets. Extreme propagation times, |G| − 1 and |G| − 2, will be discussed, as well as graph family results.
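The color change rule can be simulated directly; a minimal sketch (graphs as adjacency dictionaries; the simultaneous-round semantics matches the propagation-time definition above):

```python
def psd_propagation_time(adj, black):
    """Simulate the PSD color change rule round by round.
    adj: dict vertex -> list of neighbors; black: initial black set.
    Returns the number of rounds needed, or None if forcing stalls."""
    black = set(black)
    rounds = 0
    while len(black) < len(adj):
        white = set(adj) - black
        # connected components of G - B
        comps, seen = [], set()
        for v in white:
            if v in seen:
                continue
            comp, stack = set(), [v]
            while stack:
                u = stack.pop()
                if u in comp:
                    continue
                comp.add(u)
                seen.add(u)
                stack.extend(w for w in adj[u] if w in white)
            comps.append(comp)
        # apply the rule simultaneously for every black vertex / component
        forced = set()
        for comp in comps:
            for b in black:
                white_nbrs = [w for w in adj[b] if w in comp]
                if len(white_nbrs) == 1:
                    forced.add(white_nbrs[0])
        if not forced:
            return None
        black |= forced
        rounds += 1
    return rounds

path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
assert psd_propagation_time(path4, {0}) == 3   # forces one vertex per round
assert psd_propagation_time(star, {0}) == 1    # center forces all leaves at once
```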

Contributed Session on Matrix Completion Problems

Matrix Completion Problems

Gloria Cravo

In this talk we present several results in the area of the so-called matrix completion problems, such as the description of the eigenvalues or the characteristic polynomial of a square matrix when some of its entries are prescribed. We give special emphasis to the situation where the matrix is partitioned into several blocks and some of these blocks are prescribed.

Ray Nonsingularity of Cycle Chain Matrices

Yue Liu, Haiying Shan

Ray nonsingular (RNS) matrices are a generalization of sign nonsingular (SNS) matrices from the real field to the complex field. The problem of how to recognize ray nonsingular matrices is still open. In this paper, we give an algorithm to recognize ray nonsingularity for the so-called “cycle chain matrices”, whose associated digraphs have a certain simple structure. Furthermore, the cycle chain matrices that are not ray nonsingular are classified into two classes according to whether or not they can be majorized by some RNS cycle chain matrix, and the case when they can is characterized.

Partial matrices whose completions all have the same rank

James McTigue

A partial matrix over a field F is a matrix whose entries are either elements of F or independent indeterminates. A completion of such a partial matrix is obtained by specifying values from F for the indeterminates. We determine the maximum possible number of indeterminates in an m × n partial matrix (m ≤ n) whose completions all have a particular rank r, and we fully describe those examples in which this maximum is attained, without any restriction on the field F.

We also describe sufficient conditions for an m × n partial matrix whose completions all have rank r to contain an r × r partial matrix whose every completion is nonsingular.

This work extends some recent results of R. Brualdi, Z.Huang and X. Zhan.

Dobrushin ergodicity coefficient for Markov operators on cones, and beyond

Stephane Gaubert, Zheng Qu

The analysis of classical consensus algorithms relies on contraction properties of adjoints of Markov operators, with respect to Hilbert's projective metric or to a related family of seminorms (Hopf's oscillation or Hilbert's seminorm). We generalize these properties to abstract consensus operators over normal cones, which include the unital completely positive maps (Kraus operators) arising in quantum information theory. In particular, we show that the contraction rate of such operators, with respect to the Hopf oscillation seminorm, is given by an analogue of Dobrushin's ergodicity coefficient. We derive from this result a characterization of the contraction rate of a non-linear flow, with respect to Hopf's oscillation seminorm and to Hilbert's projective metric.
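For context, the classical Dobrushin ergodicity coefficient that this work generalizes is, for a row-stochastic matrix P, δ(P) = (1/2) max_{i,j} Σ_k |P_{ik} − P_{jk}|, and it bounds how much P contracts differences of probability vectors; a quick numerical sketch:

```python
import numpy as np

def dobrushin(P):
    """Dobrushin ergodicity coefficient of a row-stochastic matrix:
    delta(P) = (1/2) * max_{i,j} sum_k |P[i,k] - P[j,k]|."""
    n = P.shape[0]
    return 0.5 * max(np.abs(P[i] - P[j]).sum()
                     for i in range(n) for j in range(n))

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
d = dobrushin(P)
assert abs(d - 0.7) < 1e-12

# delta(P) bounds the l1-contraction of differences of distributions:
mu = np.array([1.0, 0.0]); nu = np.array([0.0, 1.0])
assert np.abs((mu - nu) @ P).sum() <= d * np.abs(mu - nu).sum() + 1e-12
```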

The Normal Defect of Some Classes of Matrices

Ryan Wasson, Hugo Woerdeman

An n × n matrix A has normal defect k if there exists an (n + k) × (n + k) normal matrix A_ext with A as a leading principal submatrix and k minimal. In this paper we compute the normal defect of a special class of 4 × 4 matrices, namely matrices whose only nonzero entries lie on the superdiagonal, and we provide details for constructing minimal normal completion matrices A_ext. We also prove a result for a related class of n × n matrices. Finally, we present an example of a 6 × 6 block diagonal matrix having the property that its normal defect is strictly less than the sum of the normal defects of each of its blocks, and we provide sufficient conditions for when the normal defect of a block diagonal matrix is equal to the sum of the normal defects of each of its blocks.

Contributed Session on Matrix Equalities and Inequalities

Rank One Perturbations of H-Positive Real Matrices

Jan Fourie, Gilbert Groenewald, Dawid Janse van Rensburg, André Ran

We consider a generic rank-one structured perturbation of H-positive real matrices. The complex case is treated in general, but the main focus of this article is the real case, where something interesting occurs at the eigenvalue zero and even-size Jordan blocks. Generic Jordan structures of the perturbed matrices are identified.

Sylvester equation and some special applications

Hosoo Lee, Sejong Kim

In this article we mainly consider the following matrix equation for an unknown m × n matrix X:

A^{N−1}X + A^{N−2}XB + · · · + AXB^{N−2} + XB^{N−1} = C,    (3)

where A and B are m × m and n × n matrices, respectively. First, we review not only the necessary and sufficient condition for the existence and uniqueness of the solution of the Sylvester equation, but also the specific forms of the unique solution depending on the conditions of the spectra. We also study a special type of matrix equation associated with the Lyapunov operator and see its unique solution derived from the known result for the Sylvester equation. Finally, we show that equation (3) has the solution X = AY − Y B whenever Y is a solution of the Sylvester equation A^N Y − Y B^N = C. Furthermore, we have the specific form of the unique solution of (3).
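The last claim rests on a telescoping identity: substituting X = AY − YB into the left-hand side of (3) collapses the sum to A^N Y − Y B^N. A numerical sketch using SciPy's standard Sylvester solver (the random test matrices are our choice):

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
m, n, N = 4, 3, 5
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
C = rng.standard_normal((m, n))

# Solve A^N Y - Y B^N = C, a standard Sylvester equation in Y
# (solve_sylvester(a, b, q) solves a @ x + x @ b = q).
Y = solve_sylvester(np.linalg.matrix_power(A, N),
                    -np.linalg.matrix_power(B, N), C)
X = A @ Y - Y @ B        # candidate solution via the telescoping identity

# Check that X satisfies  sum_{j=0}^{N-1} A^{N-1-j} X B^j = C.
S = sum(np.linalg.matrix_power(A, N - 1 - j) @ X @ np.linalg.matrix_power(B, j)
        for j in range(N))
assert np.allclose(S, C)
```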

Geometry of Hilbert’s and Thompson’s metricson cones

Bas Lemmens, Mark Roelands

It is well known that the cone of positive definite Hermitian matrices can be equipped with a Riemannian metric. There are two other important metrics on this cone: Hilbert's (projective) metric and Thompson's metric. These two metrics can be defined on general cones, and their geometric properties are very interesting. For example, Hilbert's metric spaces are a natural non-Riemannian generalization of hyperbolic space. In this talk I will discuss some of the geometric properties of Hilbert's and Thompson's metrics. In particular, I will explain the structure of the unique geodesics and the possibility of quasi-isometric embeddings of these metric spaces into finite-dimensional normed spaces.
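On the interior of the standard positive cone of R^n, both metrics have well-known explicit formulas (standard facts, not stated in the abstract): d_H(x, y) = log max_i(x_i/y_i) − log min_i(x_i/y_i) and d_T(x, y) = max(log max_i(x_i/y_i), log max_i(y_i/x_i)). A quick sketch:

```python
import numpy as np

def hilbert(x, y):
    """Hilbert's projective metric on the interior of the positive cone."""
    r = x / y
    return np.log(r.max()) - np.log(r.min())

def thompson(x, y):
    """Thompson's (part) metric on the interior of the positive cone."""
    return max(np.log((x / y).max()), np.log((y / x).max()))

x = np.array([1.0, 2.0]); y = np.array([2.0, 1.0])
assert np.isclose(hilbert(x, y), np.log(4.0))
assert np.isclose(thompson(x, y), np.log(2.0))
# Hilbert's metric is projective: rescaling either argument changes nothing.
assert np.isclose(hilbert(3.0 * x, y), hilbert(x, y))
```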

Refining some inequalities involving quasi-arithmetic means

Jadranka Micic Hot, Josip Pecaric, Kemal Hot

Let A and B be unital C*-algebras on H and K, respectively. Let (x_t)_{t∈T} be a bounded continuous field of self-adjoint elements in A with spectra in an interval [m, M], defined on a locally compact Hausdorff space T equipped with a bounded Radon measure μ. Let (φ_t)_{t∈T} be a unital field of positive linear mappings φ_t : A → B. We observe the quasi-arithmetic operator mean

M_ϕ(x, φ) = ϕ^{−1}( ∫_T φ_t(ϕ(x_t)) dμ(t) ),

where ϕ ∈ C([m, M]) is a strictly monotone function. This mean is briefly denoted by M_ϕ. We denote by m_ϕ and M_ϕ the lower and upper bounds of the mean M_ϕ, respectively, and set ϕ_m = min{ϕ(m), ϕ(M)}, ϕ_M = max{ϕ(m), ϕ(M)}.

The purpose of this presentation is to consider refined converse inequalities involving quasi-arithmetic means. To obtain these results, we use refined converses of Jensen's inequality under the above assumptions. Applying the obtained results we further refine selected inequalities involving power means. In the end, a few numerical examples are given for various types of converse inequalities.

So, for example, we present the following ratio type converse order:

M_ψ ≤ C(m_ϕ, M_ϕ, ϕ_m, ϕ_M, ψ∘ϕ^{−1}, ψ^{−1}, δ m_x̃) M_ϕ
    ≤ C(m_ϕ, M_ϕ, ϕ_m, ϕ_M, ψ∘ϕ^{−1}, ψ^{−1}, 0) M_ϕ
    ≤ C(ϕ_m, ϕ_M, ψ∘ϕ^{−1}, ψ^{−1}) M_ϕ,

for all strictly monotone functions ϕ, ψ ∈ C([m, M]) such that ϕ > 0 on [m, M], ψ∘ϕ^{−1} is convex and ψ^{−1} is operator monotone, where we used the following abbreviations:

δ ≡ δ(m, M, ψ, ϕ) := ψ(m) + ψ(M) − 2 ψ∘ϕ^{−1}( (ϕ(m) + ϕ(M))/2 ),

m_x̃ is the lower bound of the operator

x̃(m, M, ϕ, x, φ) := (1/2) 1_K − (1/|ϕ(M) − ϕ(m)|) ∫_T φ_t( |ϕ(x_t) − ((ϕ(m) + ϕ(M))/2) 1_H| ) dμ(t),

and

C(n, N, m, M, f, g, c) := max_{n ≤ z ≤ N} g( ((f(M) − f(m))/(M − m)) z + (M f(m) − m f(M))/(M − m) − c ) / (g∘f)(z),

C(m, M, f, g) := C(m, M, m, M, f, g, 0).

Contributed Session on Matrix Pencils

Symmetric Fiedler Pencils with Repetition as Strong Linearizations for Symmetric Matrix Polynomials

Maria Isabel Bueno Cachadina, Kyle Curlett, Susana Furtado

Strong linearizations of an n × n matrix polynomial P(λ) = Σ_{i=0}^{k} A_i λ^i that preserve some structure of P(λ) are relevant in many applications. We characterize all the pencils in the family of Fiedler pencils with repetition associated with a square matrix polynomial P(λ) that are symmetric when P(λ) is. The pencils in this family, introduced by Antoniou and Vologiannidis, are companion forms, that is, pencils whose matrix coefficients are block matrices whose blocks are of the form 0_n, ±I_n, or ±A_i. When some nonsingularity conditions are satisfied, these pencils are strong linearizations of P(λ). In particular, when P(λ) is a symmetric polynomial and the coefficients A_k and A_0 are nonsingular, our family strictly contains the basis for the space DL(P) defined and studied by Mackey, Mackey, Mehl, and Mehrmann.

Orbit closure hierarchies of skew-symmetric matrix pencils

Andrii Dmytryshyn, Stefan Johansson, Bo Kagstrom

The reduction of a complex skew-symmetric matrix pencil A − λB to any canonical form under congruence is an unstable operation: both the canonical form and the reduction transformation depend discontinuously on the entries of A − λB. Thus it is important to know the canonical forms of all such pencils that are arbitrarily close to A − λB, or in other words, to know how small perturbations of a skew-symmetric matrix pencil may change its canonical form under congruence.

We solve this problem by deriving the closure hierarchy graphs (i.e., stratifications) of orbits and bundles of skew-symmetric matrix pencils. Each node of such a graph represents an orbit (or a bundle) and each edge represents the cover/closure relation (i.e., there is a directed path from the node A − λB to the node C − λD if and only if A − λB can be transformed by an arbitrarily small perturbation into a matrix pencil whose canonical form is C − λD).

In particular, we prove that for two complex skew-symmetric matrix pencils A − λB and C − λD there exists a sequence of nonsingular matrices R_n, S_n such that R_n(C − λD)S_n → (A − λB) if and only if there exists a sequence of nonsingular matrices W_n such that W_n^T(C − λD)W_n → (A − λB). This result can be interpreted as a generalization (or a continuous counterpart) of the fact that two skew-symmetric matrix pencils are equivalent if and only if they are congruent.

Skew-symmetric Strong Linearizations for Skew-symmetric Matrix Polynomials obtained from Fiedler Pencils with Repetition

Susana Furtado, Maria Isabel Bueno

Let F be a field with characteristic different from 2 and let P(λ) be a matrix polynomial of degree k whose coefficients are matrices with entries in F. The matrix polynomial P(λ) is said to be skew-symmetric if P(λ) = −P(λ)^T.

In this talk we present nk × nk skew-symmetric matrix pencils associated with n × n skew-symmetric matrix polynomials P(λ) of degree k ≥ 2 whose coefficient matrices have entries in F. These pencils are of the form SL, where L is a Fiedler pencil with repetition and S is a direct sum of blocks of the form I_n or −I_n. Under certain conditions, these pencils are strong linearizations of P(λ). These linearizations are companion forms in the sense that, if their coefficients are viewed as k-by-k block matrices, each n × n block is either 0, ±I_n, or ±A_i, where A_i, i = 0, ..., k, are the coefficients of P(λ). The family of Fiedler pencils with repetition was introduced by S. Vologiannidis and E. N. Antoniou (2011).

New bounds for roots of polynomials from Fiedler companion matrices

Froilan M. Dopico, Fernando De Teran, Javier Perez

In 2003 Miroslav Fiedler introduced a new family of companion matrices of a given monic scalar polynomial p(z). This family of matrices includes as particular cases the classical Frobenius companion forms of p(z). Every Fiedler matrix shares with the Frobenius companion forms the property that its characteristic polynomial is p(z), that is, the roots of p(z) are the eigenvalues of the Fiedler matrices. Frobenius companion matrices have been used extensively to obtain upper and lower bounds for the roots of a polynomial. In this talk, we derive new upper and lower bounds on the absolute values of the roots of a monic polynomial using Fiedler matrices. These bounds are based on the fact that if λ is an eigenvalue of a nonsingular matrix A and ‖·‖ is a submultiplicative matrix norm, then ‖A^{−1}‖^{−1} ≤ |λ| ≤ ‖A‖. For this purpose, we present explicit formulae for the 1-norm and ∞-norm of any Fiedler matrix and its inverse. These formulas are key to obtaining new upper and lower bounds for the roots of monic polynomials. In addition, we compare these new bounds with the bounds obtained from the Frobenius companion forms and show that, for many polynomials, the lower bounds obtained from the inverses of Fiedler matrices improve the lower bounds obtained from the Frobenius companion forms when |p(0)| < 1.
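The norm inequality can be illustrated with the classical Frobenius companion form (the Fiedler matrices of the talk generalize this construction; the polynomial below is our own test case):

```python
import numpy as np

def companion(coeffs):
    """Frobenius companion matrix of the monic polynomial
    z^n + c[0] z^(n-1) + ... + c[n-1]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[0, :] = -np.asarray(coeffs, dtype=float)
    C[1:, :-1] = np.eye(n - 1)
    return C

# p(z) = z^3 - 6z^2 + 11z - 6 = (z - 1)(z - 2)(z - 3)
C = companion([-6.0, 11.0, -6.0])
roots = np.linalg.eigvals(C)

# ||A^{-1}||^{-1} <= |lambda| <= ||A|| for any submultiplicative norm,
# so take the best of the 1-norm and the infinity-norm on both sides.
upper = min(np.linalg.norm(C, 1), np.linalg.norm(C, np.inf))
Cinv = np.linalg.inv(C)
lower = max(1 / np.linalg.norm(Cinv, 1), 1 / np.linalg.norm(Cinv, np.inf))

assert all(lower - 1e-12 <= abs(r) <= upper + 1e-12 for r in roots)
```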

Linearizations of Matrix Polynomials in the Bernstein Basis

D. Steven Mackey, Vasilije Perovic

Scalar polynomials in the Bernstein basis are well studied due to their importance in applications, most notably in computer-aided geometric design. Matrix polynomials expressed in the Bernstein basis, however, have only recently been studied. Two considerations led us to systematically study such matrix polynomials: the increasing use of non-monomial bases in practice, and the desirable numerical properties of the Bernstein basis. For a matrix polynomial P(λ), the classical approach to solving the polynomial eigenvalue problem P(λ)x = 0 is to first convert P into a matrix pencil L with the same finite and infinite elementary divisors, and then work with L. This method has been extensively developed for polynomials P expressed in the monomial basis. But what about when P(λ) is in the Bernstein basis? It is important to avoid reformulating such a P into the monomial basis, since this change of basis is poorly conditioned and can therefore introduce numerical errors that were not present in the original problem. Using novel tools, we show how to work directly with any P(λ) expressed in the Bernstein basis to generate large families of linearizations that are also expressed in the Bernstein basis. Connections with low-bandwidth Fiedler-like linearizations, which could have numerical impact, are also established. We also illustrate how existing structure-preserving eigenvalue algorithms for structured pencils in the monomial basis can be adapted for use with structured pencils in the Bernstein basis. Finally, we note that several results in the literature are readily obtained by specializing our techniques to the case of scalar polynomials expressed in the Bernstein basis.

Perturbations of singular Hermitian pencils

Michal Wojtylak, Christian Mehl, Volker Mehrmann

Low-rank perturbations of pencils A + λE, with Hermitian A and E, are studied. The main interest lies in the case when the pencil is singular; the distance from an arbitrary pencil to the set of singular pencils is also studied.

Contributed Session on Nonnegative Matrices Part 1

About negative zeros of entire functions, totally non-negative matrices and positive definite quadratic forms

Prashant Batra

Holtz and Tyaglov, in SIAM Review 54 (2012), recently proposed a number of results characterizing reality, negativity, and interlacing of polynomial roots via totally non-negative infinite matrices. We show how to replace such characterizations by structurally similar finite equivalents, and subsequently extend the new criteria naturally from polynomials to entire functions with only real zeros. The extension relies on a three-fold connection between mapping properties of meromorphic functions, certain Mittag-Leffler expansions, and non-negative Hankel determinants. We derive from this connection a simple, concise proof of an extended Liénard-Chipart criterion for polynomials and entire functions of genus zero. Using an infinite matrix H for a real entire function f of genus zero, we derive a new criterion for zero-location exclusively on the negative real axis. We show moreover that in the polynomial case (where deg f = n) the first n consecutive principal minors of even order already characterize total non-negativity of H.

The nonnegative inverse eigenvalue problem

Anthony Cronin

In this talk we investigate the nonnegative inverse eigenvalue problem (NIEP). This is the problem of characterizing all possible spectra of nonnegative n × n matrices. In particular, given a list of n complex numbers σ = (λ_1, λ_2, ..., λ_n), can we find necessary and sufficient conditions on the list σ so that it is the list of eigenvalues of an entry-wise nonnegative n × n matrix A. The problem remains unsolved, and a complete solution is known only for n ≤ 4. I will give a brief outline of the history and motivation behind the NIEP and discuss the more important results to date. I will then give a survey of the main results from my PhD thesis and current work.

An extension of the dqds algorithm for totally nonnegative matrices and the Newton shift

Akiko Fukuda, Yusaku Yamamoto, Masashi Iwasaki,Emiko Ishiwata, Yoshimasa Nakamura

The recursion formula of the qd algorithm is known to be equal to the integrable discrete Toda equation. In this talk, we consider the discrete hungry Toda (dhToda) equation, which is a generalization of the discrete Toda equation. We show that the dhToda equation is applicable to computing eigenvalues of a class of totally nonnegative (TN) matrices, that is, matrices all of whose minors are nonnegative. The algorithm is named the dhToda algorithm, and can be regarded as an extension of the dqds algorithm. The differential form and the origin shift are incorporated into the dhToda algorithm. It is also shown that the order of convergence of the dhToda algorithm with the Newton shift is quadratic.

An Iterative Method for Detecting Semi-positive Matrices

Keith Hooper, Michael Tsatsomeros

Definition: a semi-positive matrix A is a matrix such that there exists x > 0 satisfying Ax > 0. This class of matrices generalizes the well-studied P-matrices. It is known that determining whether or not a given matrix is a P-matrix is co-NP-complete. One of the questions we answer is whether or not this difficulty also arises in determining whether or not a matrix is semi-positive. The simple definition of a semi-positive matrix also has geometric interpretations, which will be mentioned briefly. In the talk, an iterative method for detecting whether or not a matrix is semi-positive is discussed.
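Independently of the iterative method of the talk, semi-positivity can be decided by linear-programming feasibility: by homogeneity, there exists x > 0 with Ax > 0 iff the system {x ≥ 1, Ax ≥ 1} (componentwise) is feasible. A sketch using SciPy:

```python
import numpy as np
from scipy.optimize import linprog

def is_semipositive(A):
    """Check whether there exists x > 0 with Ax > 0, via LP feasibility of
    {x >= 1 (componentwise), Ax >= 1}; equivalent by scaling a strict x."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # linprog uses A_ub @ x <= b_ub; encode Ax >= 1 as -Ax <= -1.
    res = linprog(c=np.zeros(n), A_ub=-A, b_ub=-np.ones(m),
                  bounds=[(1, None)] * n, method="highs")
    return bool(res.success)

assert is_semipositive(np.eye(3))          # identity: take x = (1, 1, 1)
assert not is_semipositive(-np.eye(3))     # -I: Ax = -x < 0 for every x > 0
```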

On Inverse-Positivity of Sub-direct Sums of Matrices

Shani Jose, Sivakumar K. C.

The sub-direct sum, a generalization of the direct sum and the normal sum of matrices, was proposed by Fallat and Johnson. This concept has applications in matrix completion problems and in overlapping subdomains in domain decomposition methods. The definition of the k-subdirect sum is given as follows:

Let A = [D E; F G] and B = [P Q; S T], where D ∈ R^{(m−k)×(m−k)}, E ∈ R^{(m−k)×k}, F ∈ R^{k×(m−k)}, Q ∈ R^{k×(n−k)}, S ∈ R^{(n−k)×k}, T ∈ R^{(n−k)×(n−k)}, and G, P ∈ R^{k×k}. The k-subdirect sum of A and B is denoted A ⊕_k B and is defined as

A ⊕_k B = [D E 0; F G+P Q; 0 S T]. (4)

One of the main questions in connection with the sub-direct sum is whether certain special classes of matrices are closed with respect to the sub-direct sum operation. In this talk, we consider the class of all inverse-positive matrices. We provide certain conditions under which the k-subdirect sum of inverse-positive matrices is inverse-positive. We also present sufficient conditions for the converse to hold: in other words, under certain assumptions, we prove that an inverse-positive matrix can be written as a k-subdirect sum of inverse-positive matrices. We consider the case k ≠ 1, as these results are known for the 1-subdirect sum. We tackle the problem in its full generality by assuming that the individual summand matrices have non-trivial blocks.
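The definition (4) can be sketched directly in code; a minimal illustration under the stated block dimensions (the function name is ours):

```python
import numpy as np

def subdirect_sum(A, B, k):
    """k-subdirect sum A ⊕_k B: overlap the trailing k×k block of A
    with the leading k×k block of B, adding them (G + P), per (4)."""
    m, n = A.shape[0], B.shape[0]
    assert A.shape == (m, m) and B.shape == (n, n) and 0 <= k <= min(m, n)
    C = np.zeros((m + n - k, m + n - k))
    C[:m, :m] += A              # places D, E, F, G
    C[m - k:, m - k:] += B      # adds P onto G; places Q, S, T
    return C
```

For k = 0 this reduces to the ordinary direct sum; for k = m = n it is the usual matrix sum.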

Matrix roots of positive and eventually positive matrices

Pietro Paparella, Judith McDonald, Michael Tsatsomeros

Eventually positive matrices are real matrices whose powers become and remain strictly positive. As such, eventually positive matrices are a fortiori matrix roots of positive matrices, which motivates us to study the matrix roots of primitive matrices. Using classical matrix function theory and Perron-Frobenius theory, we characterize, classify, and describe, in terms of the real Jordan canonical form, the pth roots of eventually positive matrices.

Contributed Session on Nonnegative Matrices, Part 2

Product distance matrix of a tree with matrix weights

Ravindra Bapat, Sivaramakrishnan Sivasubramanian

Let T be a tree with n vertices and let k be a positive integer. Let each edge of the tree be assigned a weight, which is a k × k matrix. The product distance from vertex i to vertex j is defined to be the identity matrix of order k if i = j, and the product of the matrices corresponding to the ij-path otherwise. The distance matrix of T is the nk × nk block matrix whose ij-block equals the product distance from i to j. We obtain formulas for the determinant and the inverse of the product distance matrix, generalizing known results for the case k = 1.
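A direct sketch of this definition, assuming the convention that the same k × k weight is used for an edge traversed in either direction (the data layout and function name are ours):

```python
import numpy as np

def product_distance_matrix(n, k, edges):
    """nk × nk block product distance matrix of a tree on vertices 0..n-1.
    edges: dict {(u, v): k×k weight matrix}."""
    adj = {u: [] for u in range(n)}
    W = {}
    for (u, v), M in edges.items():
        adj[u].append(v); adj[v].append(u)
        W[(u, v)] = M; W[(v, u)] = M       # same weight both ways (assumption)
    D = np.zeros((n * k, n * k))
    for i in range(n):
        # walk the tree from i, multiplying edge weights along each path
        dist = {i: np.eye(k)}
        stack = [i]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] @ W[(u, v)]
                    stack.append(v)
        for j in range(n):
            D[i * k:(i + 1) * k, j * k:(j + 1) * k] = dist[j]
    return D
```

For k = 1 this is the usual product distance matrix with scalar weights; for k > 1 the matrix weights need not commute, so the ij- and ji-blocks can differ.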

Some monotonicity results for Euclidean distance matrices

Hiroshi Kurata


In this talk, some monotonicity results for Euclidean distance matrices (EDMs) are given. We define a partial ordering on the set of EDMs and derive several inequalities for the eigenvalues and the configuration sets of the EDMs. The talk can be viewed as presenting further results building on Kurata and Sakuma (2007, LAA) and Kurata and Tarazaga (2012, LAA).

On the CP-rank and the DJL Conjecture

Naomi Shaked-Monderer, Imanuel Bomze, Florian Jarre, Werner Schachinger, Abraham Berman

A matrix A is completely positive if it can be factored as A = BB^T, where B is a nonnegative, not necessarily square, matrix. The minimal number of columns in such a B is the cp-rank of A. Finding a sharp upper bound on the cp-rank of n × n completely positive matrices is one of the basic open questions in the study of completely positive matrices. According to the DJL conjecture (Drew, Johnson and Loewy, 1994), for n ≥ 4 this bound may be ⌊n²/4⌋. The best known bound for n ≥ 5 has until now been the BB bound n(n+1)/2 − 1 (due to Barioli and Berman, 2003).

We show that the maximum cp-rank of n × n completely positive matrices is attained at a positive definite matrix on the boundary of the cone of n × n completely positive matrices, thus answering a long standing question. This new result is used to prove the DJL conjecture in the case n = 5, completing the work of Loewy and Tam (2003) on that case. For n ≥ 5 we show that the maximum cp-rank of n × n completely positive matrices is strictly smaller than the BB bound.

Based on joint works with Imanuel M. Bomze, Florian Jarre, Werner Schachinger and Abraham Berman.

Binary factorizations of the matrix of all ones

Maguy Trefois, Paul Van Dooren, Jean-Charles Delvenne

We consider the factorization problem for the square matrix of all ones, where the factors are square matrices constrained to have all elements equal to 0 or 1. We show that under some conditions on the rank of the factors, these are essentially De Bruijn matrices. More specifically, we are looking for the n × n binary solutions of ∏_{i=1}^{m} A_i = I_n, where I_n is the n × n matrix with all ones, and in particular, we are investigating the binary solutions to the equation A^m = I_n. Our main result states that if we impose all factors A_i to be p_i-regular and such that all their products commute, then rank(A_i) = n/p_i if and only if A_i is essentially a De Bruijn matrix. In particular, we prove that if A is a binary m-th root of I_n, then A is p-regular. Moreover, rank(A) = n/p if and only if A is essentially a De Bruijn matrix.
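The De Bruijn connection can be illustrated concretely: the adjacency matrix of the standard De Bruijn graph on n = p^m nodes is a binary p-regular matrix whose m-th power is the all-ones matrix, with rank n/p. A small sketch (this builds the standard De Bruijn graph, not the authors' more general "essentially De Bruijn" matrices):

```python
import numpy as np

def de_bruijn_adjacency(p, m):
    """Adjacency matrix of the De Bruijn graph on n = p^m nodes:
    node i has an edge to (p*i + b) mod n for each symbol b = 0..p-1."""
    n = p ** m
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for b in range(p):
            A[i, (p * i + b) % n] = 1
    return A
```

For p = 2, m = 2, every pair of nodes is joined by exactly one walk of length 2, so A² is the 4 × 4 all-ones matrix, each row of A sums to p = 2, and rank(A) = n/p = 2.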

Nonnegative Eigenvalues and Total Orders

Rohan Hemasinha, James Weaver

A solution to Problem 49-4 from IMAGE, Issue 49 (Fall 2012), proposed by Rajesh Pereira, is outlined. The problem is as follows: Let A be an n × n real matrix. Show that the following are equivalent: (a) All the eigenvalues of A are real and nonnegative. (b) There exists a total order @ on R^n (a partial order in which any two vectors are comparable) which is preserved by A (i.e., x ∈ R^n and x @ 0 implies Ax @ 0) and which makes (R^n, @) an ordered vector space. We then explore whether the total order is unique up to an isomorphism.

Contributed Session on Numerical Linear Algebra, Part 1

Block Gram-Schmidt Downdating

Jesse Barlow

The problem of deleting multiple rows from a QR factorization, called the block downdating problem, is considered. The problem is important in the context of solving recursive least squares problems where observations are added or deleted over time.

The problem is solved by a block classical Gram-Schmidt procedure with re-orthogonalization. However, two heuristics are needed for the block downdating problem that are not needed in the case of deleting a single row; one is a condition estimation heuristic, the other is a method for constructing one left orthogonal matrix that is orthogonal to another. The difficulties that require these heuristics tend to occur when the block downdating operation changes the rank of the matrix.

Structured eigenvalue condition numbers for parameterized quasiseparable matrices

Froilan Dopico

It is well known that n × n quasiseparable matrices can be represented in terms of O(n) parameters or generators, but these generators are not unique. In this talk, we present eigenvalue condition numbers of quasiseparable matrices with respect to tiny relative perturbations of the generators, and compare these condition numbers for different specific sets of generators of the same matrix.

Explicit high-order time stepping based on componentwise application of asymptotic block Lanczos iteration

James Lambers

This talk describes the development of explicit time stepping methods for linear and nonlinear PDEs that


are specifically designed to cope with the stiffness of the system of ODEs that results from spatial discretization. As stiffness is caused by the contrasting behavior of coupled components of the solution, it is proposed to adopt a componentwise approach in which each coefficient of the solution in an appropriate basis is computed using an individualized approximation of the solution operator. This is accomplished by Krylov subspace spectral (KSS) methods, which use techniques from "matrices, moments and quadrature" to approximate bilinear forms involving functions of matrices via block Gaussian quadrature rules. These forms correspond to coefficients, with respect to the chosen basis, of the application of the solution operator of the PDE to the solution at an earlier time. In this talk, it is shown that the efficiency of KSS methods can be substantially enhanced through the prescription of quadrature nodes on the basis of asymptotic analysis of the recursion coefficients produced by block Lanczos iteration, for each Fourier coefficient as a function of frequency. The effectiveness of this idea is illustrated through numerical results obtained from the application of the modified KSS methods to diffusion equations and wave equations, as well as nonlinear equations through combination with exponential propagation iterative (EPI) methods.

Gauss-Jordan elimination method for computing outer inverses

Marko Petkovic, Predrag Stanimirovic

This paper deals with an algorithm for computing outer inverses with prescribed range and null space, based on the choice of an appropriate matrix G and Gauss-Jordan elimination of the augmented matrix [G | I]. The advantage of such algorithms is that one can compute various generalized inverses using the same procedure, for different input matrices. In particular, we derive representations of the Moore-Penrose inverse and the Drazin inverse, as well as {2,4}- and {2,3}-inverses. Numerical examples on different test matrices are presented, as well as a comparison with well-known methods for computing generalized inverses.

Contributed Session on Numerical Linear Algebra, Part 2

Generalized Chebyshev polynomials

Clemente Cesarano

We present some generalizations of Chebyshev polynomials of the first and second kind by using the operational relations stated for Hermite polynomials. We derive integral representations and new identities for two-variable Chebyshev polynomials. Finally, we also show some relations for generalized Gegenbauer polynomials.

CP decomposition of partial symmetric tensors

Na Li, Carmeliza Navasca

We study the CANDECOMP/PARAFAC decomposition of partial symmetric tensors and present a new alternating way to solve for the factor matrices. The original objective function is reformulated, and the factor matrices involved in the symmetric modes are updated column by column. We also provide numerical examples to show that our method is better than the classical alternating least squares method in terms of CPU time and number of iterations.

On Moore-Penrose Inverse based Image Restoration

Surya Prasath, K. C. Sivakumar, Shani Jose, K. Palaniappan

We consider the motion deblurring problem from image processing. A linear motion model along with random noise is assumed for the image formation process. We solve the inverse problem of estimating the latent image using the Moore-Penrose (MP) inverse of the blur matrix. Traditional MP inverse based schemes can lead to ringing artifacts due to the approximate computations involved. Here we utilize the classical Greville partitioning based method [T. N. E. Greville, Some applications of the pseudoinverse of a matrix, SIAM Review 2(1), 15-22, 1960] for computing the MP inverse of the deblurring operator, along with Tikhonov regularization for the denoising step. We provide a review of MP inverse based image processing techniques and show some experimental results in image restoration.
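The MP-inverse restoration idea can be sketched on a toy 1-D periodic blur (this example is ours; it omits the Greville partitioning and the Tikhonov denoising step described above):

```python
import numpy as np

def blur_matrix(n, w=3):
    """Hypothetical 1-D periodic motion blur: each blurred pixel
    averages w consecutive pixels of the latent signal."""
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(w):
            H[i, (i + j) % n] = 1.0 / w
    return H

# Blur a known signal, then restore it with the Moore-Penrose inverse.
n = 16
H = blur_matrix(n)
x = np.arange(n, dtype=float)   # latent "image"
blurred = H @ x
restored = np.linalg.pinv(H) @ blurred
```

In the noiseless case the restoration is exact up to rounding; with noise, the unregularized pseudoinverse amplifies it, which is precisely why the regularized scheme in the talk is needed.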

Construction of all discrete orthogonal and q-orthogonal classical polynomial sequences using infinite matrices and Pincherle derivatives

Luis Verde-Star

In the paper L. Verde-Star, Characterization and construction of classical orthogonal polynomials using a matrix approach, Linear Algebra Appl. (2013), http://dx.doi.org/10.1016/j.laa.2013.01.014, we obtained explicit formulas for the recurrence coefficients and the Bochner differential operator of all the classical orthogonal polynomial sequences. In this talk we present some extensions and simplifications of our previous results that allow us to find, in an explicit way, all the discrete orthogonal and q-orthogonal classical polynomial sequences.

We use Pincherle derivatives to solve a system of two infinite matrix equations of the form LA = AX, Y AE = AB, where A is the matrix associated with an orthogonal sequence of polynomials p_k(x), L is the tridiagonal matrix of recurrence coefficients, X is the representation of multiplication by the variable x, Y is a diagonal matrix, B represents a Bochner-type operator, and E is the advance operator.

The Pincherle derivative of a matrix Z is defined by Z′ = XZ − ZX. Using Pincherle derivatives we show


that BE^{-1} and X satisfy a simple polynomial equation that must also be satisfied when BE^{-1} is replaced by Y and X is replaced by L. All the entries of the matrices are found in terms of 4 of the initial recurrence coefficients and a scaling parameter that acts on Y and B.

An Algorithm for the Nonlinear Eigenvalue Problem based on the Contour Integral

Yusaku Yamamoto

We consider the solution of the nonlinear eigenvalue problem A(z)x = 0, where A(z) is an n × n matrix whose elements are analytic functions of a complex parameter z, and x is an n-dimensional vector. In this talk, we focus on finding all the eigenvalues that lie in a specified region of the complex plane.

Conventional approaches to the nonlinear eigenvalue problem include multivariate Newton's method and its modifications, nonlinear Arnoldi methods and nonlinear Jacobi-Davidson methods. However, Newton's method requires a good initial estimate for stable convergence. Nonlinear Arnoldi and Jacobi-Davidson methods are efficient for large sparse matrices, but in general they cannot guarantee that all the eigenvalues in a specified region are obtained.

In our approach, we construct a complex function g(z) that has simple poles at the roots of the nonlinear eigenvalue problem and is analytic elsewhere. This can be achieved by letting f(z) = det(A(z)) and setting g(z) as

g(z) ≡ f′(z)/f(z) = Tr[ (A(z))^{-1} dA(z)/dz ]. (5)

Then, by computing the complex moments through contour integration along the boundary of the specified region, we can locate all the poles, or eigenvalues, using the method of Kravanja et al. The multiplicity of each eigenvalue can also be obtained from the 0th moment. Another advantage of our approach is its coarse-grain parallelism, because the main computational task is the evaluation of g(z) at sample points along the integration path, and each evaluation can be done independently.
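The 0th moment already gives an eigenvalue count: (1/2πi) ∮ g(z) dz equals the number of roots of det(A(z)) inside the contour, counted with multiplicity. A minimal sketch using a trapezoid rule on the unit circle (illustrative only, not the authors' moment/Kravanja implementation; the function names are ours):

```python
import numpy as np

def count_eigs_in_unit_circle(A, dA, npts=200):
    """Approximate (1/2πi) ∮ Tr[A(z)^{-1} A'(z)] dz over the unit circle,
    i.e. the number of roots of det(A(z)) inside it, with multiplicity.
    A, dA: callables z -> n×n array (the matrix and its derivative)."""
    total = 0.0
    for t in range(npts):
        z = np.exp(2j * np.pi * t / npts)
        g = np.trace(np.linalg.solve(A(z), dA(z)))  # g(z) = Tr[A^{-1} A'], eq. (5)
        dz = 2j * np.pi * z / npts                  # trapezoid step along the circle
        total += g * dz
    return round((total / (2j * np.pi)).real)
```

For the linear test problem A(z) = zI − diag(0.5, 2), only the eigenvalue 0.5 lies inside the unit circle, so the count is 1; the periodic trapezoid rule converges geometrically here because g is analytic in an annulus around the contour.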

A possible difficulty in this approach is that the evaluation of g(z) can be costly, because it involves the matrix inverse (A(z))^{-1} and its product with another matrix. To solve this problem, we developed an efficient algorithm to compute g(z) by extending Erisman and Tinney's algorithm for computing Tr[A^{-1}]. Given dA(z)/dz and the LU factors of A(z), our algorithm can compute g(z) with only twice the work needed to compute the LU decomposition of A(z). This is very efficient when A(z) is a large sparse matrix.

We implemented our algorithm and evaluated its performance on the Fujitsu HPC2500 supercomputer. For random matrices of size from 500 to 2,000 with exponential dependence on z, our algorithm was able to find all the eigenvalues within the unit circle centered at the origin with high accuracy. The parallel speedup was nearly linear, as can be expected from the structure of the algorithm.

Contributed Session on Numerical Linear Algebra, Part 3

A Geometrical Approach to Finding Multivariate Approximate LCMs and GCDs

Kim Batselier, Philippe Dreesen, Bart De Moor

In this talk we present a new approach to numerically compute an approximate least common multiple (LCM) and an approximate greatest common divisor (GCD) of two multivariate polynomials. This approach uses the geometrical notion of principal angles between subspaces, whereas the main computational tools are Implicitly Restarted Arnoldi Iterations, the SVD and the QR decomposition. Upper and lower bounds are derived for the largest and smallest singular values of the highly structured Macaulay matrix. This leads to an upper bound on its condition number and an upper bound on the 2-norm of the product of two multivariate polynomials. Numerical examples illustrate the effectiveness of the proposed approach.

Global Krylov subspace methods for computing the meshless elastic polyharmonic splines

Abderrahman Bouhamidi

Meshless elastic polyharmonic splines are useful for the approximation of vector fields from scattered data points without using any mesh or grid. They are based on the minimization of a certain energy in an appropriate functional native space. Such an energy is related to the strain tensor constraint and to the divergence of the vector field. The computation of such splines leads to a large linear system. In this talk, we will discuss how to transform such a linear system into a general Sylvester matrix equation, so that global Krylov subspace methods can be used to compute approximations to the solution.

The Stationary Iterations Revisited

Xuzhou Chen, Xinghua Shi, Yimin Wei

In this talk, we first present necessary and sufficient conditions for the weak and strong convergence of the general stationary iteration x^(k+1) = Tx^(k) + c with iteration matrix T and vectors c and x^(0). Then we apply these general conditions to present convergence conditions for stationary iterations for solving the singular linear system Ax = b. We show that our convergence conditions are weaker and more general than the known results.
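For reference, the iteration itself is elementary; a minimal sketch (the classical sufficient condition ρ(T) < 1 appears in the comment; the talk's weaker conditions for the singular case are not reproduced here):

```python
import numpy as np

def stationary_iteration(T, c, x0, iters=500):
    """Run x^(k+1) = T x^(k) + c from x0.  For nonsingular problems,
    convergence from any x0 holds when the spectral radius of T is < 1,
    with limit x* = (I - T)^{-1} c."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = T @ x + c
    return x
```

For example, with T = 0.5·I and c = (1, 1)ᵀ the iterates converge to the fixed point (2, 2)ᵀ.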

Derivatives of Functions of Matrices

Sonia Carvalho, Pedro Freitas


There have been recent results concerning directional derivatives, of degree higher than one, for the determinant, the permanent, and the symmetric and antisymmetric power maps, by R. Bhatia, P. Grover and T. Jain. We present generalizations of these results for immanants and other symmetric powers of a matrix.

Matrix-Free Methods for Verifying Constrained Positive Definiteness and Computing Directions of Negative Curvature

William Morrow

In this talk we review several methods for verifying constrained positive definiteness of an N × N matrix H: positive definiteness over a linear subspace C of dimension L. C is often defined through the null space of some M × N matrix A (M < N, N = M + L). A related problem concerns computing a direction of negative curvature d for H in C, defined as any d ∈ C such that d⊤Hd < 0. This problem is primarily relevant in optimization theory, but is also closely related to more general saddle-point systems; the computation of a direction of negative curvature, should one exist, is particularly important for second-order convergent optimization methods for nonconvex problems.

We present several contributions. First, existing methods based on Gaussian elimination or inertia-revealing factorization require knowing H explicitly and do not reveal a direction of negative curvature. We present three new methods based on Cholesky factorization, orthogonalization, and (projected) Conjugate Gradients that are "matrix-free" in the sense that in place of H they require only a routine to compute Hv for any v, and that reveal directions of negative curvature should constrained positive definiteness not hold. Partitioned implementations that take advantage of the Level-3 BLAS routines are given and compared to existing methods using state of the art software (including LAPACK, LUSOL, HSL, and SuiteSparse routines). Second, we provide a variety of examples based on matrices from the UF sparse matrix collection as well as large-scale optimization test problems in which the appropriate H matrix is not known exactly. These results show that (a) different methods are better suited for different problems, (b) the ability to check the constrained positive definiteness of H (and compute a direction of negative curvature) when only products Hv are known is valuable to optimization methods, and (c) numerical errors can lead to incorrect rejection of constrained positive definiteness. Third, we examine the numerical accuracy of different methods, including cases where H itself or products Hv may contain errors. Practical problems must be founded on an understanding of how errors in computing null space bases and H (or products Hv) obscure inferences of the constrained positive definiteness of the "true" H. Our analysis is focused on practical methods that can certify inferences regarding the constrained positive definiteness of the "true" H.
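For intuition, the dense, explicit-H version of the task can be sketched via an SVD null-space basis and a reduced eigendecomposition (a naive reference method of ours, not one of the matrix-free algorithms described above):

```python
import numpy as np

def negative_curvature_direction(H, A, tol=1e-12):
    """Check positive definiteness of H on null(A); if it fails, return a
    direction d with A d = 0 and d^T H d < 0, else return None."""
    # Orthonormal basis Z for null(A) from the SVD of A
    _, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    Z = Vt[rank:].T                      # columns span null(A)
    w, V = np.linalg.eigh(Z.T @ H @ Z)   # spectrum of the reduced Hessian
    if w[0] >= 0:
        return None                      # constrained positive (semi)definite
    return Z @ V[:, 0]                   # d = Z v for the most negative eigenpair
```

This makes explicit what the matrix-free methods must achieve without ever forming H or Z⊤HZ.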

A practical rational Krylov algorithm for solving large-scale nonlinear eigenvalue problems

Roel Van Beeumen, Karl Meerbergen, Wim Michiels

In a previous contribution, we introduced the Newton rational Krylov method for solving the nonlinear eigenvalue problem (NLEP):

A(λ)x = 0.

Now, we present a practical two-level rational Krylov algorithm for solving large-scale NLEPs. At the first level, the large-scale NLEP is projected, yielding a small NLEP at the second level. Therefore, the method consists of two nested iterations. In the outer iteration, we construct an orthogonal basis V and project A(λ) onto it. In the inner iteration, we solve the small NLEP Ã(λ)x̃ = 0 with Ã(λ) = V*A(λ)V. We use polynomial interpolation of Ã(λ), resulting, after linearization, in a generalized linear eigenvalue problem with a companion-type structure. The interpolating matrix polynomial is connected with the invariant pairs in such a way that it allows locking of invariant pairs and implicit restarting of the algorithm. We also illustrate the method with numerical examples and give a number of scenarios where the method performs very well.

Contributed Session on Numerical Range and Spectral Problems

An exact method for computing the minimal polynomial

Christopher Bailey

We develop a method to obtain the minimal polynomial of any matrix. Our method uses exact arithmetic and simplifies greatly when applied to lower Hessenberg matrices. Although numerical considerations are not addressed here, the algorithm may be easily implemented to be numerically efficient.

Inclusion theorems for pseudospectra of block triangular matrices

Michael Karow

The ε-pseudospectrum of A ∈ C^{n×n}, denoted by σ_ε(A), is the union of the spectra of the matrices A + E, where E ∈ C^{n×n} and ‖E‖₂ ≤ ε. In this talk we consider inclusion relations of the form

σ_{f(ε)}(A11) ∪ σ_{f(ε)}(A22) ⊆ σ_ε([A11 A12; 0 A22]) ⊆ σ_{g(ε)}(A11) ∪ σ_{g(ε)}(A22).

We derive formulae for f(ε) and g(ε) in terms of sep_λ(A11, A22) and ‖R‖₂, where R is the solution of the Sylvester equation A11 R − R A22 = A12.
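The Sylvester equation appearing here can be solved, for small illustrative sizes, by Kronecker vectorization (a naive O(n⁶) sketch of ours, not a recommended production method such as Bartels-Stewart):

```python
import numpy as np

def solve_sylvester(A11, A22, A12):
    """Solve A11 R - R A22 = A12 via vectorization: using column-major vec,
    vec(A11 R) = (I ⊗ A11) vec(R) and vec(R A22) = (A22^T ⊗ I) vec(R)."""
    m, n = A11.shape[0], A22.shape[0]
    K = np.kron(np.eye(n), A11) - np.kron(A22.T, np.eye(m))
    vecR = np.linalg.solve(K, A12.flatten(order="F"))
    return vecR.reshape((m, n), order="F")
```

The system is nonsingular exactly when A11 and A22 have no common eigenvalue, which is also when the R in the inclusion bounds above is well defined.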

Connectedness, Hessians and Generalized Numerical Ranges

Xuhua Liu, Tin-Yau Tam


The classical numerical range of a complex square matrix A is the set

W(A) = { x*Ax : x ∈ C^n and ‖x‖₂ = 1 },

which is convex. We give a brief survey on the convexity of some generalized numerical ranges associated with semisimple Lie algebras. We provide another proof of the convexity of a generalized numerical range associated with a compact Lie group via a connectedness result of Atiyah and a Hessian index result of Duistermaat, Kolk and Varadarajan.
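The convexity of W(A) underlies the standard way of plotting it: for each rotation angle θ, the top eigenvector of the Hermitian part of e^{iθ}A yields a boundary point. A minimal sketch (ours, not from the talk):

```python
import numpy as np

def numerical_range_boundary(A, npts=360):
    """Sample boundary points of W(A): for each angle θ, the top eigenvector
    x of the Hermitian part of e^{iθ}A maximizes Re(e^{iθ} x*Ax), so x*Ax
    lies on the boundary of the (convex) numerical range."""
    A = np.asarray(A, dtype=complex)
    pts = []
    for t in range(npts):
        R = np.exp(1j * 2 * np.pi * t / npts) * A
        Hm = (R + R.conj().T) / 2          # Hermitian part
        _, V = np.linalg.eigh(Hm)
        x = V[:, -1]                       # eigenvector of the largest eigenvalue
        pts.append(x.conj() @ A @ x)
    return np.array(pts)
```

For a normal matrix such as A = diag(0, 1), the sampled points fall on the convex hull of the eigenvalues, here the real segment [0, 1].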

Matrix canonical forms under unitary similarity transformations. Geometric approach.

Yuri Nesterenko

Matrices A, B ∈ C^{n×n} are unitarily similar if a similarity transformation between them can be implemented using a unitary matrix U: B = UAU*. Since unitary transformations preserve some important characteristics of a matrix (e.g., condition number), this class of matrix transformations is very popular in applications. We present a method, based on a geometric approach, for verifying whether nonderogatory matrices are unitarily similar. Given an arbitrary nonderogatory matrix, we construct a finite family of unitarily similar matrices for it. Whether or not two matrices are unitarily similar can then be answered by checking whether their corresponding families intersect. From the geometric point of view, this canonical family is a finite set of ruled surfaces in the space of upper triangular matrices.

As a rule, starting from the Schur upper triangular form, different methods try to create as many positive real off-diagonal elements as possible. But such a property of the desired canonical form inevitably leads to a form that is unstable with respect to errors in the initial triangular form. Our approach is free of this shortcoming. The stability property seems all the more important because an upper triangular form of a matrix is usually obtained by approximate methods (e.g., the QR algorithm).

Contributed Session on Positive Definite Matrices

Preserving low rank positive semidefinite matrices

Dominique Guillot, Apoorva Khare, Bala Rajaratnam

We consider the problem of characterizing real-valued functions f which, when applied entrywise to matrices of a given rank, preserve positive semidefiniteness. Understanding if and how positivity is preserved in various settings has been the focus of a concerted effort throughout the past century. The problem has been studied most notably by Schoenberg and Rudin. One of their most significant results states that f preserves the set of all positive semidefinite matrices if and only if f is absolutely monotonic on the positive axis, i.e., has a Taylor series with nonnegative coefficients.

In this work, we focus on functions preserving positivity for matrices of low rank. We obtain several new characterizations of these functions. Additionally, our techniques apply to classical problems such as the one mentioned above. In contrast to previous work, our approach is elementary, and enables us to provide intuitive, elegant, and enlightening proofs.

Sparse positive definite matrices, graphs, and absolutely monotonic functions

Dominique Guillot, Apoorva Khare, Bala Rajaratnam

We study the problem of characterizing functions which, when applied entrywise, preserve the set of positive semidefinite matrices with zeroes according to a family G = {G_n} of graphs. The study of these and other maps preserving different notions of positivity has many modern manifestations and involves an interplay between linear algebra, graph theory, convexity, spectral theory, and harmonic analysis. In addition to the inherent theoretical interest, this problem has important consequences in machine learning (via kernels), in regularization of covariance matrices, and in the study of Toeplitz operators.

We obtain a characterization for G an arbitrary collection of trees; the only previous result along these lines was for all complete graphs, G = {K_n}, in terms of absolutely monotonic functions. We discuss the case of fractional Hadamard powers and of polynomials with negative coefficients for trees. Finally, we find a stronger condition for preserving positivity for any sequence G = {G_n} of graphs with unbounded maximal degree, which is only satisfied by absolutely monotonic functions.

Regularization of positive definite matrices: Connections between linear algebra, graph theory, and statistics

Bala Rajaratnam, Dominique Guillot, Brett Naul, Al Hero

Positive definite matrices arise naturally in many areas within mathematics and also feature extensively in scientific applications. In modern high-dimensional applications, a common approach to finding sparse positive definite matrices is to threshold their small off-diagonal elements. This thresholding, sometimes referred to as hard-thresholding, sets small elements to zero. Thresholding has the attractive property that the resulting matrices are sparse, and are thus easier to interpret and work with. In many applications, it is often required, and thus implicitly assumed, that thresholded matrices retain positive definiteness. We formally investigate the (linear) algebraic properties of positive definite matrices which are thresholded. We also undertake a detailed study of soft-thresholding, another important technique used in practice. Some interesting and unexpected results will be presented. Finally, we obtain a full characterization of general maps which preserve positivity when applied to off-diagonal elements, thereby extending previous work by Schoenberg and Rudin.


Speaker Index

(minisymposium and contributed talks)

Ilan Adler, page 52
Marianne Akian, page 50
Daniel Alpay, page 38
Lee Altenberg, page 64
Milica Andelic, page 80
Marina Arav, page 71
Jared Aurentz, page 62
Haim Avron, page 70
Marc Baboulin, page 72
James Baglama, page 42
Zhaojun Bai, page 67
Christopher Bailey, page 89
Joseph Ball, page 74
Ravindra Bapat, page 85
Jesse Barlow, page 86
Alvaro Barreras, page 59
Harm Bart, page 75
Prashant Batra, page 84
Kim Batselier, page 88
Christopher Beattie, page 38
Natalia Bebiano, page 61
Robert Beezer, page 44
Pascal Benchimol, page 40
Adam Berliner, page 53
Shankar Bhattacharyya, page 49
Vladimir Bolotnikov, page 38
Matthias Bolten, page 79
Shreemayee Bora, page 68
Abderrahman Bouhamidi, page 88
Brice Boyer, page 76
Maria Isabel Bueno Cachadina, page 83
Steve Butler, page 54
Rafael Canto, page 59
Angeles Carmona, page 79
Nieves Castro-Gonzalez, page 41
Minerva Catral, page 54
Minnie Catral, page 41
Clemente Cesarano, page 87
Xuzhou Chen, page 88
Jianxin Chen, page 46
Gi-Sang Cheon, page 39
Man-Duen Choi, page 46
Richard Cottle, page 52
Gloria Cravo, page 81
Anthony Cronin, page 84
Dragana Cvetkovic-Ilic, page 41
Carlos da Fonseca, page 39
Geir Dahl, page 39
Biswa Datta, page 49
Bart De Bruyn, page 78
Lieven De Lathauwer, page 65
Eric de Sturler, page 43
Fernando De Teran, page 68
Louis Deaett, page 54
Gianna Del Corso, page 59
Gianna Del Corso, page 62
Jorge Delgado, page 60
Chunyuan Deng, page 42
Holger Dette, page 56

Huaian Diao, page 42
Huaian Diao, page 52
Andrii Dmytryshyn, page 83
Sergey Dolgov, page 65
Sergey Dolgov, page 65
Gregor Dolinar, page 78
Esther Dopazo, page 42
Froilan Dopico, page 86
Vladimir Druskin, page 56
Antonio Duran, page 56
Natasa Durdevac, page 65
Wayne Eberly, page 76
Tom Edgar, page 44
Yuli Eidelman, page 63
Yuli Eidelman, page 72
Andres Encinas, page 80
Craig Erickson, page 71
Shaun Fallat, page 54
Quanlei Fang, page 38
Joseph Fehribach, page 81
Melina Freitag, page 49
Pedro Freitas, page 88
Shmuel Friedland, page 46
Shmuel Friedland, page 66
Akiko Fukuda, page 85
Susana Furtado, page 83
Wei Gao, page 71
Colin Garnett, page 54
Martin Gavalec, page 60
Luca Gemignani, page 72
Luca Gemignani, page 63
Jeff Geronimo, page 56
Alex Gittens, page 70
Assaf Goldberger, page 50
Jason Grout, page 44
Dominique Guillot, page 90
Edward Hanson, page 57
Yorick Hardy, page 78
Tomohiro Hayashi, page 61
Leslie Hogben, page 71
Olga Holtz, page 63
James Hook, page 40
Keith Hooper, page 85
Jinchuan Hou, page 46
Zejun Huang, page 47
Marko Huhtanen, page 57
Matthias Humet, page 63
Youngmi Hur, page 79
Ilse Ipsen, page 72
Ilse Ipsen, page 70
David Jacobs, page 81
Carl Jagels, page 57
Dawid Janse van Rensburg, page 82
Elias Jarlebring, page 68
Marianne Johnson, page 40
Charles Johnson, page 39
Charles Johnson, page 60
Charles Johnson, page 61
Nathaniel Johnston, page 47

Shani Jose, page 85
Marinus Kaashoek, page 75
Erich Kaltofen, page 76
Johan Karlsson, page 38
Michael Karow, page 89
Vladimir Kazeev, page 66
Apoorva Khare, page 90
Kathleen Kiernan, page 39
Misha Kilmer, page 66
Youngdae Kim, page 52
In-Jae Kim, page 39
David Kimsey, page 75
Sandra Kingan, page 44
Steve Kirkland, page 54
Fuad Kittaneh, page 61
Oliver Knill, page 54
Woong Kook, page 55
Rostyslav Kozhan, page 57
David Kribs, page 47
Nikolai Krivulin, page 40
Hiroshi Kurata, page 85
Olga Kushel, page 60
Olga Kushel, page 51
Seung-Hyeok Kye, page 47
George Labahn, page 77
James Lambers, page 86
Stephane Launois, page 60
Hosoo Lee, page 82
Sang-Gu Lee, page 45
Bas Lemmens, page 82
Melvin Leok, page 49
Izchak Lewkowicz, page 38
Na Li, page 87
Zhongshan Li, page 71
Xiaoye Li, page 43
Lek-Heng Lim, page 66
Yongdo Lim, page 51
Minghua Lin, page 61
Brian Lins, page 51
Yue Liu, page 81
Xuhua Liu, page 89
Leo Livshits, page 78
Marie MacCaig, page 40
Thomas Mach, page 72
D. Steven Mackey, page 68
Michael Mahoney, page 73
Michael Mahoney, page 70
Manuel Manas, page 57
Francisco Marcellan, page 57
Andrea Marchesini, page 41
Jose-Javier Martínez, page 60
Judi McDonald, page 72
Judi McDonald, page 39
Judi McDonald, page 45
James McTigue, page 81
Karl Meerbergen, page 68
Christian Mehl, page 69
Volker Mehrmann, page 50
Andrey Melnikov, page 80
Timothy Melvin, page 72



Seth Meyer, page 55
T. S. Michael, page 55
Jadranka Micic Hot, page 82
Margarida Mitjana, page 80
Martin Mohlenkamp, page 67
Jason Molitierno, page 55
Lajos Molnar, page 47
Keiichi Morikuni, page 53
Julio Moro, page 51
William Morrow, page 89
Dijana Mosic, page 42
Gergo Nagy, page 47
Yuri Nesterenko, page 90
Amir Niknejad, page 65
Viorel Nitica, page 41
Vanni Noferini, page 69
Roger Nussbaum, page 51
Polona Oblak, page 78
Steven Osborne, page 55
Michael Overton, page 50
Victor Pan, page 73
Victor Pan, page 77
Victor Pan, page 63
Pietro Paparella, page 85
Diane Christine Pelejo, page 47
Rajesh Pereira, page 62
Javier Perez, page 84
Vasilije Perovic, page 84
Jennifer Pestana, page 43
Marko Petkovic, page 87
Caitlin Phifer, page 81
Jan Plavka, page 60
Sarah Plosker, page 48
Federico Poloni, page 69
Edward Poon, page 48
Surya Prasath, page 87
Panayiotis Psarrakos, page 48
Xiaofei Qi, page 48
Sanzheng Qiao, page 53

Zheng Qu, page 82
Rachel Quinlan, page 79
Rachel Quinlan, page 45
Hermann Rabe, page 75
Bala Rajaratnam, page 90
Lothar Reichel, page 58
Daniel Richmond, page 43
Leonardo Robol, page 63
Leiba Rodman, page 75
Yousef Saad, page 73
Hashim Saber, page 80
Alexander Sakhnovich, page 75
Takashi Sano, page 62
David Saunders, page 77
Hans Schneider, page 61
Ameur Seddik, page 62
Peter Semrl, page 78
Sergei Sergeev, page 61
Bryan Shader, page 55
Naomi Shaked-Monderer, page 86
Meisam Sharify, page 41
Jinglai Shen, page 52
Ilya Spitkovsky, page 76
Andreas Stathopoulos, page 44
Sepideh Stewart, page 45
Arne Storjohann, page 77
Gilbert Strang, page 45
Gilbert Strang, page 76
Peter Strobach, page 63
Mikhail Stroev, page 79
David Strong, page 46
Jeffrey Stuart, page 46
Raymond Nung-Sing Sze, page 48
Ferenc Szollosi, page 39
Daniel Szyld, page 44
Tin-Yau Tam, page 48
Tin-Yau Tam, page 62
Leo Taslaman, page 69
Sanne ter Horst, page 38
Paul Terwilliger, page 58

Ryan Tifenbach, page 39
Francoise Tisseur, page 73
Francoise Tisseur, page 69
David Titley-Peloquin, page 53
Maguy Trefois, page 86
Eugene Tyrtyshnikov, page 67
Andre Uschmajew, page 67
Roel Van Beeumen, page 89
Pauline van den Driessche, page 55
Paul Van Dooren, page 74
Paul Van Dooren, page 50
Charles Van Loan, page 67
Raf Vandebril, page 58
Raf Vandebril, page 64
Kevin Vander Meulen, page 40
Bart Vandereycken, page 50
Luis Velazquez, page 58
Luis Verde-Star, page 87
Gilles Villard, page 77
Luc Vinet, page 59
Dan Volok, page 38
Nathan Warnberg, page 81
Ryan Wasson, page 82
David Watkins, page 64
Megan Wawro, page 46
James Weaver, page 86
Marcus Weber, page 65
Manda Winlaw, page 67
Hugo Woerdeman, page 76
Hugo Woerdeman, page 59
Michal Wojtylak, page 84
Jianlin Xia, page 74
Yusaku Yamamoto, page 88
Takeaki Yamazaki, page 62
Michael Young, page 56
Ion Zaballa, page 70
Fuzhen Zhang, page 62
Justyna Zwolak, page 48
Helena Smigoc, page 51
