
Convex parametric piecewise quadratic optimization: Theory, Algorithms and Control Applications

Panagiotis Patrinos, Haralambos Sarimveis*

E-mail addresses: [email protected] (P. Patrinos), [email protected] (H. Sarimveis).

Abstract

In this paper we study the problem of parametric minimization of convex piecewise quadratic functions. Our study provides a unifying framework for convex parametric quadratic and linear programs. Furthermore, it extends parametric programming algorithms to problems with piecewise quadratic cost functions, paving the way for new applications of parametric programming in dynamic programming and optimal control.

Key words: parametric optimization; algorithms and software; control of constrained systems

1. Introduction

Sensitivity of solutions to nonlinear programs, variational inequalities and generalized equations subject to perturbations has been a subject of intense research during the last three decades (Fiacco, 1983, Rockafellar & Wets, 2009, Klatte & Kummer, 2002, Dontchev & Rockafellar, 2009, Facchinei & Pang, 2003). Early works included applications of the classical implicit function theorem (or Robinson's (1980) implicit function theorem) to establish the existence of a smooth (or Lipschitz continuous, respectively), single-valued localization of the solution mapping around a reference point of the parameter vector for a nonlinear program. Directional differentiability of the solution for parametric nonlinear programs (Ralph & Dempe, 1995) and variational inequalities (Facchinei & Pang, 2003) has been studied under conditions that guarantee uniqueness of the solution. Furthermore, formulas for computing the directional derivatives are presented in these works.

The introduction of graphical derivatives for set-valued mappings by Aubin (1981) and the development of calculus rules for tangent cones of general sets (Aubin & Frankowska, 1990, Rockafellar & Wets, 2009) paved the way for developing graphical derivative formulas for set-valued solution mappings of parametric nonlinear programs (Levy & Rockafellar, 1995, Levy, 2001), variational inequalities (Levy & Rockafellar, 1996; Dontchev & Rockafellar, 2001b) and generalized equations (Levy & Rockafellar, 1994; Dontchev & Rockafellar, 2001a). These results allow one to draw conclusions for the solution mapping of the problem under investigation, when the data of the problem is slightly perturbed, without having to solve the original problem from scratch.

Unlike sensitivity analysis, which is restricted to studying properties of the solution mapping of an optimization problem only in the vicinity of a reference value of the parameter vector, the goal of parametric optimization is to compute the solution mapping for every value of the parameter vector ranging in some set (Bank, Guddat, Klatte, Kummer & Tammer, 1982, Pistikopoulos, Georgiadis & Dua, 2007a, 2007b). It is evident that sensitivity analysis and parametric optimization are tightly related, and the profound results obtained in the first area can be used in algorithms for parametric optimization. This route was followed by Patrinos & Sarimveis (2010), who obtained formulas for the graphical derivative of the solution mapping for parametric linear and quadratic programs. Exploiting the fact that the solution mappings are polyhedral, these formulas were obtained in light of strong versions of calculus rules for images and inverse images of sets expressible as finite unions of polyhedral sets. Based on these formulas, they showed how one can compute all adjacent critical regions along a facet of an already discovered critical region efficiently, without any restrictive nondegeneracy assumptions.

Interest in parametric optimization algorithms for linear and quadratic programs resurged during the last decade, due to the observation that certain optimal control problems for constrained systems can be solved explicitly off-line (Bemporad, Morari, Dua & Pistikopoulos, 2002, Bemporad, Borrelli & Morari, 2002, Borrelli, Bemporad & Morari, 2003). For developments in this field, see the recent survey, Alessio & Bemporad (2009).

Parametric optimization has also enabled the explicit, exact solution (without gridding of the state-space) of dynamic programming equations for various classes of problems. Dynamic programming coupled with parametric programming has been employed by Bemporad et al. (2003) and Diehl and Bjornberg (2004) for min-max optimal control problems of constrained uncertain linear systems with a polyhedral performance index, by Baotić, Christophersen and Morari (2006) and Christophersen, Baotić and Morari (2005) for finite or infinite horizon optimal control of piecewise affine systems with polyhedral cost index, by Kerrigan and Mayne (2002) and Spjøtvold, Kerrigan, Mayne and Johansen (2009) for finite horizon inf-sup optimal control of piecewise affine systems with polyhedral cost index, and by Borrelli, Baotić, Bemporad and Morari (2005) for problems with a quadratic cost index.

Constrained H∞ optimal control of linear systems with additive disturbances has been studied in Mayne, Raković, Vinter and Kerrigan (2006). The authors show that the value functions are strictly convex piecewise quadratic and the optimal solution is unique and piecewise affine under a restrictive nondegeneracy assumption that is needed to guarantee differentiability of the value function and cannot be checked a priori. However, one of the contributions of the paper is that they characterize the value function and optimal solution of parametric optimization problems with strictly convex piecewise quadratic objective function. For related results, see Mayne, Rakovic & Kerrigan (2007) and Rawlings & Mayne (2009).

In this paper, we first extend the results of Mayne et al. (2006) to the case where the objective function is convex, but not necessarily strictly convex, piecewise quadratic, thus allowing the solution mapping to be multivalued. The problem we study is generic in the sense that parametric linear, quadratic, piecewise affine and piecewise quadratic objective functions fit into our model. Then, extending the framework developed in Patrinos & Sarimveis (2010), we provide a computable formula for the graphical derivative of solution mappings for piecewise quadratic optimization problems. This result enables the determination of all critical regions adjacent to a facet of an already discovered critical region (the so-called adjacency oracle, cf. Jones, 2005). Based on this fact we develop a new algorithm for parametric piecewise quadratic optimization problems using the graph traversal paradigm (Jones, 2005) for the exploration of the parameter space. The algorithm has compelling advantages over that of Mayne et al. (2006), since there is no need to solve a nonsmooth optimization problem of dimension equal to that of the original problem in order to discover critical regions. Instead, problems of much smaller dimension need to be solved, taking full advantage of the rationale of sensitivity analysis, where one can draw conclusions about the behavior of the solution set of an optimization problem when the data of the problem is slightly perturbed, without having to solve the original problem from scratch. Furthermore, methods based on the graph traversal paradigm are the only known approaches that are able to achieve output sensitivity, i.e. their running time is a function of only the size of the input (problem dimension) and output (number of critical regions).

The proposed algorithm coupled with dynamic programming can be employed to compute the explicit solution of constrained stochastic/robust optimal control problems with a quadratic cost index. In Section 7 we present one such application regarding H∞ optimal control of constrained linear systems (Mayne et al., 2006). Another potential application is in computational convex analysis. Specifically, the family of convex piecewise quadratic functions is the largest class of convex functions that is closed under all fundamental convex operators, i.e. addition, scalar multiplication, conjugation, inf-projection, epi-composition and taking the Moreau envelope (Lucet, Bauschke, Trienis, 2009). The first two operations are straightforward. The algorithm developed in this paper is the only known algorithm that can compute the conjugate, inf-projection, epi-composition and the Moreau envelope of a piecewise quadratic function. We will exploit this property more in Section 7, where we approximate the nonsmooth piecewise quadratic value function of the constrained H∞ optimal control problem by its Moreau envelope, which is both piecewise quadratic and smooth. This allows us to relax the restrictive differentiability assumption on the value functions imposed by Mayne et al. (2006), and to compute an arbitrarily close to optimal solution of the constrained H∞ optimal control problem.
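For completeness, recall the standard construction referred to here (a definition recalled only for convenience, not a result of this paper): the Moreau envelope of a proper, lsc, convex function g with parameter γ > 0 is

e_γ g(x) ≜ inf_z { g(z) + (1/(2γ)) ‖z − x‖² }.

Since it is an inf-projection of a convex PWQ function whenever g is convex PWQ, e_γ g is itself convex PWQ and, in addition, continuously differentiable; moreover e_γ g(x) → g(x) as γ ↓ 0, which is what makes the approximation of the value function arbitrarily tight.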

The paper is structured as follows. Section 2 introduces basic notation and mathematical concepts used throughout the paper. Section 3 studies basic properties of piecewise quadratic functions and piecewise quadratic optimization problems. In Section 4, properties of the value function and solution mapping of the parametric piecewise quadratic optimization problem are given. Section 5 is the core section of the paper, providing the graphical derivative formula for the solution set mapping. In Section 6 we show how this fundamental result can be utilized to determine the critical regions adjacent to a facet of an already discovered critical region, and the main algorithm of the paper is presented. Section 7 presents an application of the proposed algorithm to the solution of the constrained H∞ optimal control problem, including an illustrative example. Conclusions are given in Section 8.

2. Mathematical Preliminaries

2.1. Notation and basic definitions

The support of a vector x ∈ R^n is defined as σ(x) ≜ {i | x_i > 0}. If A ∈ R^{m×n} is a matrix and I ⊆ [1,m] is a set of row indices, where [1,m] denotes the index set {1,...,m}, then A_I is the matrix formed by the rows of A whose indices are in I. I^c ≜ [1,m] \ I, i.e. I^c is the complement of I. pos(A) ≜ {Ay | y ≥ 0} denotes the conic hull of the columns of A.

For a set C ⊂ R^n, δ_C denotes the indicator function, i.e. δ_C(x) = 0 if x ∈ C and δ_C(x) = ∞ otherwise. The dimension of a set C ⊂ R^n is the dimension of its affine hull (the smallest affine set that includes C), and is denoted by dim C. An element x ∈ C is called an interior point of C if there exists a neighborhood of x that is contained in C. The set of all points interior to C is called the interior of C and is denoted by int C. We say that x is a closure point of a set C if there exists a sequence {x^ν} ⊆ C that converges to x. The set of all closure points is the closure of C and is denoted by cl C. The interior of a convex set C relative to its affine hull is the relative interior of C, denoted by ri C. If dim C = n then C is full dimensional and the relative interior of the convex set C coincides with its interior.


F(C) ≜ {F(x) | x ∈ C} denotes the image of C ⊂ R^n under the mapping F: R^n → R^m. The inverse image of D ⊂ R^m under F is F^{-1}(D) ≜ {x | F(x) ∈ D}. The polar cone of C is C° ≜ {y | y′x ≤ 0, ∀x ∈ C}. A polyhedral set is the intersection of a finite family of closed half-spaces. F is a face of the polyhedral set S ⊂ R^n if it can be expressed as argmax_{x ∈ S} y′x for some y ∈ R^n. Given an s-dimensional polyhedral set S ⊂ R^n, the facets of S are the (s−1)-dimensional faces of S.

We denote by R̄ the extended real line, i.e. R̄ ≜ [−∞, ∞]. For an extended-valued function f: R^n → R̄, dom f ≜ {x ∈ R^n | f(x) < ∞} is the effective domain of f and lev_a f ≜ {x ∈ R^n | f(x) ≤ a} is the level set of f for a ∈ R. A function f is proper if f(x) < ∞ for at least one x ∈ R^n and f(x) > −∞ for all x ∈ R^n.

2.2. Set convergence

In this paper, the notion of set convergence in the Painlevé-Kuratowski sense will be employed (Rockafellar & Wets, 2009, ch. 4). For a sequence of sets {C^ν}_{ν ∈ N}, the inner set limit liminf_ν C^ν consists of all possible limit points of sequences {x^ν}_{ν ∈ N} with x^ν ∈ C^ν. The outer set limit limsup_ν C^ν consists of all possible cluster points of sequences {x^ν}_{ν ∈ N} with x^ν ∈ C^ν. When the inner and outer limits coincide, then the limit of the sequence, lim_ν C^ν, exists.

2.3. Tangent and normal cones

The tangent cone (or contingent cone) of a nonempty set C ⊂ R^n at a given vector x ∈ C, denoted T_C(x), is the set of all vectors w ∈ R^n for which there exist sequences {x^ν} ⊂ C converging to x and positive scalars {τ^ν} converging to zero such that (x^ν − x)/τ^ν → w. It easily follows that in terms of set convergence the tangent cone can be written as T_C(x) = limsup_{τ ↓ 0} τ^{-1}(C − x). A set C is geometrically derivable at x if T_C(x) can be written as a full limit, i.e. T_C(x) = lim_{τ ↓ 0} τ^{-1}(C − x). If C is convex then it is geometrically derivable at every x ∈ C (Rockafellar & Wets, 2009, Theorem 6.9). For C convex, T_C(x) = cl{w | ∃λ > 0 with x + λw ∈ C}. Additionally, for a convex set C, the normal cone of C at x ∈ C is N_C(x) ≜ {v | v′x̄ ≤ v′x, ∀x̄ ∈ C}.
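As a simple illustration of these definitions (a standard example, included only for orientation): for the convex set C = {x ∈ R² | x₁ ≥ 0, x₂ ≥ 0} and the boundary point x = (0, 1), one finds T_C(x) = {w ∈ R² | w₁ ≥ 0} and N_C(x) = {v ∈ R² | v₁ ≤ 0, v₂ = 0}; the two cones are polar to each other, a fact used later in the proof of Proposition 3(b).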

2.4. First and second order generalized differentiability

We will make use of the following notions of generalized differentiation for f: R^n → R̄. Consider the first order quotient functions

Δ_τ f(x)(dx) ≜ ( f(x + τ dx) − f(x) ) / τ,   τ ≠ 0,

for f. The usual one-sided directional derivative of f at x ∈ dom f is:

f′(x; dx) ≜ lim_{τ ↓ 0} Δ_τ f(x)(dx),

while the subderivative is:

df(x)(dx) ≜ liminf_{τ ↓ 0, dx′ → dx} Δ_τ f(x)(dx′).

Consider the second order quotient functions

Δ²_τ f(x | v)(dx) ≜ 2τ^{-2} ( f(x + τ dx) − f(x) − τ v′dx )

and

Δ²_τ f(x)(dx) ≜ 2τ^{-2} ( f(x + τ dx) − f(x) − τ df(x)(dx) ),   τ > 0.

Then, the second subderivative of f at x for v and dx is:

d²f(x | v)(dx) ≜ liminf_{τ ↓ 0, dx′ → dx} Δ²_τ f(x | v)(dx′),

while the second subderivative at x (without mention of v) is:

d²f(x)(dx) ≜ liminf_{τ ↓ 0, dx′ → dx} Δ²_τ f(x)(dx′).

The one-sided second directional derivative of f at x ∈ dom f is:

f″(x; dx) ≜ lim_{τ ↓ 0} 2τ^{-2} ( f(x + τ dx) − f(x) − τ f′(x; dx) ).

The subdifferential of the proper, convex and lower semicontinuous (lsc) function f: R^n → R̄ at x is the set-valued mapping:

∂f(x) ≜ {v | f(z) ≥ f(x) + v′(z − x), ∀z ∈ R^n}.

For f: R^n × R^d → R̄ and a point (x,p) ∈ dom f, ∂_x f(x,p) denotes the subdifferential of f(·,p) at x.
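A one-dimensional illustration of these notions (standard, included only for orientation): for f(x) = |x| at x = 0, the quotients give df(0)(dx) = f′(0; dx) = |dx| and ∂f(0) = [−1, 1]; moreover f″(0; dx) = 0 for every dx, while for v ∈ ∂f(0) the second subderivative is d²f(0 | v)(dx) = 0 whenever |dx| = v dx and +∞ otherwise, so it is the indicator of R₊ if v = 1, of R₋ if v = −1, and of {0} if |v| < 1.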


2.5. Set-valued mappings: Continuity and graphical derivatives

The symbol S: P ⇒ X says that S is a set-valued mapping (or multifunction, or point-to-set map), mapping elements of P to subsets of X. The graph of a set-valued mapping S: R^d ⇒ R^n is the set gph S ≜ {(p,x) | x ∈ S(p)}, while its domain is dom S ≜ {p | S(p) ≠ ∅}.

S: R^d ⇒ R^n is outer semicontinuous at p if and only if p^ν → p and S(p^ν) → D imply D ⊂ S(p). It is inner semicontinuous at p if and only if p^ν → p and S(p^ν) → D imply D ⊃ S(p) (Rockafellar & Wets, 2009, Exercise 5.6). This is not the standard definition of inner and outer semicontinuity but it suffices for our purposes. A set-valued mapping S: R^d ⇒ R^n is called continuous if it is inner and outer semicontinuous. A set-valued mapping that is continuous and convex-valued (S(p) convex for each p) admits a continuous selection (Rockafellar & Wets, 2009, 5.57).

A polyhedral multifunction is a set-valued mapping whose graph is the union of finitely many polyhedral sets (Robinson, 1981). We call a multifunction whose graph is a (single) polyhedral set graph-convex polyhedral.

Graphical differentiation of set-valued mappings (see 8.G, 8.E in Rockafellar & Wets, 2009) is a notion of generalized differentiability based on the Painlevé-Kuratowski set convergence of their graphs. For a set-valued mapping S: R^d ⇒ R^n consider any (p,x) ∈ gph S. For each τ > 0, consider the difference quotient multifunction

Δ_τ S(p | x)(dp) ≜ ( S(p + τ dp) − x ) / τ.

The graphical derivative (or contingent derivative) DS(p | x) is defined as the set-valued mapping whose graph is the outer set limit of the family of sets gph Δ_τ S(p | x). Notice that by the definition of the tangent cone this definition is equivalent to

dx ∈ DS(p | x)(dp) ⟺ (dp, dx) ∈ T_{gph S}(p, x).

The inner graphical derivative is defined likewise as the set-valued mapping whose graph is the inner set limit of the family of sets gph Δ_τ S(p | x). Proto-differentiability of S at p for x entails these two limits being equal, hence in that case gph DS(p | x) = lim_{τ ↓ 0} gph Δ_τ S(p | x). Obviously, S is proto-differentiable at p for x if and only if gph S is geometrically derivable at (p, x).
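To fix ideas (a small example consistent with the definitions above, not taken from the sequel): let S(p) ≜ argmin_x {½x² − px | x ≥ 0} = {max(p, 0)}. Its graph is the union of the two polyhedral sets {(p, 0) | p ≤ 0} and {(p, p) | p ≥ 0}, so at (p,x) = (0,0) the tangent cone to gph S is {(dp, dx) | dx = max(dp, 0)} and hence DS(0 | 0)(dp) = {max(dp, 0)}; since this graph is geometrically derivable at (0,0), S is proto-differentiable there.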

3. Piecewise quadratic optimization problems

3.1. Piecewise quadratic functions

Properties of piecewise quadratic functions (PWQ for short) have been studied extensively in the literature, e.g. Rockafellar (1988), Sun (1992), Rockafellar and Wets (2009). The main reason for their study is that even though PWQ functions are nonsmooth, their simple structure allows for the determination of subderivatives and subgradients without the need for any constraint qualification. Furthermore, a rich calculus has been developed for this kind of function. In fact, in Rockafellar and Wets (2009) PWQ functions (there termed piecewise linear-quadratic functions to stress the fact that they can include linear pieces as well) are the building block of fully amenable functions, a class of functions with very favorable properties. We first present some definitions regarding collections of polyhedral sets.

Definition 1. Consider the collection of nonempty sets C = {C_k | k ∈ K}, where K is a finite index set. Then (a) C is called a partition of D ⊆ R^d if the sets in C are mutually disjoint and their union is D. (b) C is called a polyhedral decomposition of D ⊆ R^d if its members are polyhedral sets and: (i) ∪_{k ∈ K} C_k = D, (ii) dim C_k = dim D for all k ∈ K, (iii) ri C_k ∩ ri C_ℓ = ∅ for k, ℓ ∈ K, k ≠ ℓ. (c) C is called a polyhedral subdivision if it is a polyhedral decomposition and furthermore the intersection of any two members of C is either empty or a common proper face of both.

It is well known that the relative interiors of the faces of a non-empty convex set form a partition of the set (Theorem 18.2, Rockafellar, 1970). Our definition of a polyhedral decomposition bears some resemblance to notions appearing in other works. For example, in Borrelli (2003) a collection of polyhedral sets satisfying conditions (i) and (iii) of (b) is called a polyhedral partition in the broad sense. In Bank et al. (1983), a collection satisfying all the requirements of (b) but requiring merely that each member of the collection be non-empty and convex is called a partitioning. Notice that the faces of the members of a polyhedral subdivision (Scholtes, 1994, Facchinei & Pang, 2003) induce a polyhedral complex (Grünbaum, 2003).

Definition 2. A function f: R^n → R̄ is called piecewise quadratic (PWQ) if there exists a polyhedral decomposition C = {C_k | k ∈ K} of dom f, and f(x) = f_k(x) if x ∈ C_k, k ∈ K, where f_k(x) ≜ ½ x′Q_k x + q_k′x + s_k with s_k ∈ R, vector q_k ∈ R^n, and a symmetric matrix Q_k ∈ R^{n×n}.

We assume without loss of generality that the dimension of dom f is n, dim(C_k) = n, C_k = {x ∈ R^n | A^k x ≤ b^k} and A^k ∈ R^{m_k × n}. For each piece of the PWQ function f we associate the function f_k ≜ f + δ_{C_k}.
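A standard example of Definition 2 (included only for illustration) is the Huber function

f(x) = ½x² if |x| ≤ 1,   f(x) = |x| − ½ if |x| > 1,

which is convex PWQ with the polyhedral decomposition C = {(−∞,−1], [−1,1], [1,∞)} of dom f = R and pieces f₁(x) = −x − ½, f₂(x) = ½x², f₃(x) = x − ½ (so Q_k = 0, 1, 0 and q_k = −1, 0, 1 respectively).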

Definition 3. For each x ∈ dom f we call the index set K(x) ≜ {k ∈ K | x ∈ C_k} the set of active pieces.

For a non-empty polyhedral set C = {x ∈ R^n | Ax ≤ b} with b ∈ R^m, let F(C) denote the collection of all non-empty faces of C and G(C) the collection of the relative interiors of all non-empty faces of C, i.e. G(C) ≜ {ri F | F ∈ F(C)}. Also let F_j(C) denote the collection of j-dimensional faces of C. Finally, let B(C) ≜ {I ⊆ [1,m] | ∃x s.t. A_I x = b_I, A_{I^c} x < b_{I^c}}. For every index I ∈ B(C) consider the following sets: F_I ≜ {x ∈ R^n | A_I x = b_I, A_{I^c} x ≤ b_{I^c}} and G_I ≜ {x ∈ R^n | A_I x = b_I, A_{I^c} x < b_{I^c}}. The following proposition summarizes well known properties regarding the faces of polyhedral sets.
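For instance (a small example given only for illustration): for the nonnegative orthant C = {x ∈ R² | −x ≤ 0}, i.e. A = −I₂ and b = 0, one has B(C) = {∅, {1}, {2}, {1,2}}, with F_∅ = C, F_{1} = {0} × R₊, F_{2} = R₊ × {0} and F_{1,2} = {(0,0)}; the sets G_I are the corresponding relative interiors, in agreement with Proposition 1(a),(c) below.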

Proposition 1. Consider the polyhedral set C = {x ∈ R^n | Ax ≤ b}. Then:

(a) G(C) is a partition of C. (b) F(C) = {F_I | I ∈ B(C)}. (c) G_I = ri F_I, I ∈ B(C). (d) If I, I′ ∈ B(C) then I ∩ I′ ∈ B(C). (e) If I, I′ ∈ B(C) with I ⊆ I′, then F_{I′} is a face of F_I. (f) If x_1, x_2 ∈ G_I, I ∈ B(C), then N_C(x_1) = N_C(x_2). We use the notation N_C(F_I) for the normal cone of C at any point x ∈ ri F_I.

Proof. Part (a) is a general property of convex sets proved in Rockafellar (1970). Parts (b), (c) and (d) are proved in Scholtes (1994), proposition 2.1.3.1. Part (e) follows immediately from the definition of a face. Part (f) follows from (c) and Lemma 4.11 in Ewald (1996).

Consider a polyhedral decomposition C = {C_k | k ∈ K} of D. Let F(C) ≜ ∪_{k ∈ K} F(C_k), G(C) ≜ ∪_{k ∈ K} G(C_k) and B(C) ≜ ∪_{k ∈ K} B(C_k). For each x ∈ D let J(x) ≜ {G ∈ G(C) | x ∈ G}. Let J consist of all sets J ⊆ G(C) such that J = J(x) for some x ∈ D. For each J ∈ J, let G_J ≜ ∩_{G ∈ J} G, and D_J ≜ cl G_J. The following proposition is important since it defines a partition of D = dom f which plays a central role in the definition of a critical region for convex parametric piecewise quadratic optimization problems.

Proposition 2. Consider a polyhedral decomposition C = {C_k | k ∈ K} of D. Then (a) {G_J | J ∈ J} is a partition of D. (b) If x_1, x_2 ∈ G_J, J ∈ J, then K(x_1) = K(x_2). We use the notation K(J) for the set of active pieces for any x ∈ G_J. (c) For every J ∈ J, there exist unique I_k(J) ∈ B(C_k), k ∈ K(J), such that G_J = ∩_{k ∈ K(J)} G_{I_k(J)} and D_J = ∩_{k ∈ K(J)} F_{I_k(J)}. (d) For every J ∈ J, D_J ∈ F( ∩_{k ∈ K(J)} C_k ). (e) If C is a polyhedral subdivision then every face of D_J can be expressed as D_{J′} = ∩_{k ∈ K(J′)} F_{I_k(J′)} for some J′ ∈ J.

Proof. (a) This follows directly from the definition of the family J and the fact that for each x ∈ D and any k ∈ K, either x ∉ C_k or x belongs to the relative interior of a unique face of C_k. (b) Since G_J = ∩_{G ∈ J} G and each G is the relative interior of a face of some C_k, it follows that G_J ⊆ C_k for every k ∈ K such that J ∩ G(C_k) ≠ ∅. Hence, for any x ∈ G_J the set of active pieces is a constant set, say K(J). (c) We know from proposition 1(b) that each face of the polyhedral set C_k can be expressed as F_I for some I ∈ B(C_k), its relative interior being G_I. Invoking proposition 1(a) and noticing the fact that G_J is a relatively open set, one can deduce that each J ∈ J contains at most one G ∈ G(C_k) for k ∈ K(J). Therefore, for each J ∈ J, there exist unique I_k(J) ∈ B(C_k), k ∈ K(J), such that G_J = ∩_{k ∈ K(J)} G_{I_k(J)}. We have D_J = cl G_J = cl( ∩_{k ∈ K(J)} ri F_{I_k(J)} ) = ∩_{k ∈ K(J)} cl(ri F_{I_k(J)}) = ∩_{k ∈ K(J)} F_{I_k(J)}. The second equality follows from proposition 1(c), the third from theorem 6.5 in Rockafellar (1970) and the fact that the ri F_{I_k(J)}, k ∈ K(J), have a point in common, and the last equality follows from theorem 6.3 in Rockafellar (1970) and the fact that the faces of a polyhedral set are closed. (d) This follows from (b), (c) and theorem 2.4.9(ii) in Grünbaum (2003). (e) Since C is a polyhedral subdivision, from (b) and the fact that the intersection of any two members of C is a common proper face of both or empty, it follows that D_J is a common face of the C_k, k ∈ K(J), say F. Therefore, every face of D_J is also a common face of the C_k, say F′ with F′ ⊆ F. Obviously K(F′) ⊇ K(F). Therefore, every face F′ of D_J is a face of ∩_{k ∈ K(F′)} C_k. The claim now follows from (d).

Remark 1. When C comprises just one polyhedral set, J reduces to the collection of the relative interiors of its faces. Hence, the collection J generalizes that partitioning property to polyhedral decompositions. Rockafellar & Wets (2009) also define a partition of dom f for PWQ functions in their proof of Theorem 11.14(b). However, their partition is the one induced by the hyperplane arrangement corresponding to the half-spaces of the collection of the polyhedral sets. This causes unnecessary partitioning of dom f.

The following proposition summarizes basic first and second subdifferentiability properties of convex PWQ functions.

Proposition 3. Suppose that f: R^n → R̄ is convex PWQ. Then at any point x ∈ dom f:

(a) The subderivative df(x) is equal to the directional derivative f′(x; ·). Furthermore, it is convex piecewise linear with dom df(x) = T_{dom f}(x) and it is given by:

df(x)(dx) = ∇f_k(x)′dx  if dx ∈ T_{C_k}(x), k ∈ K(x).

(b) The subdifferential ∂f(x) is nonempty polyhedral, with:

∂f(x) = ∩_{k ∈ K(x)} ∂f_k(x).

(c) The one-sided second directional derivative of f at x for dx is proper, PWQ (but not necessarily convex) with dom f″(x; ·) = T_{dom f}(x):

f″(x; dx) = dx′ ∇²f_k(x) dx  if dx ∈ T_{C_k}(x), k ∈ K(x).

(d) For every v ∈ ∂f(x), the second subderivative at x for v and dx is proper, convex and PWQ:

d²f(x | v)(dx) = f″(x; dx) + δ_{C(x,v)}(dx),

where C(x,v) ≜ {dx | f′(x; dx) = v′dx} is the critical cone.

Proof. Part (a) is proved in 10.21 in Rockafellar & Wets (2009). Regarding (b), from theorem 23.2 (Rockafellar, 1970), valid for any proper convex function and for any x ∈ dom f:

∂f(x) = {v ∈ R^n | v′dx ≤ df(x)(dx), ∀dx ∈ R^n}.

Using part (a):

∂f(x) = ∩_{k ∈ K(x)} {v | v′dx ≤ ∇f_k(x)′dx, ∀dx ∈ T_{C_k}(x)} = ∩_{k ∈ K(x)} {v | v ∈ ∇f_k(x) + N_{C_k}(x)} = ∩_{k ∈ K(x)} ∂f_k(x),

where the second equality follows by polarity of T_{C_k} and N_{C_k}. For parts (c) and (d) see proposition 13.9 in Rockafellar & Wets (2009).
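As a concrete illustration of Proposition 3(b) (a small example, not used elsewhere): take f(x) = ½x² + |x| with pieces f₁ = f + δ_{(−∞,0]} and f₂ = f + δ_{[0,∞)}. At x = 0 both pieces are active, ∂f₁(0) = ∇(½x² − x)(0) + N_{(−∞,0]}(0) = −1 + [0,∞) = [−1,∞) and ∂f₂(0) = 1 + (−∞,0] = (−∞,1], so that indeed ∂f(0) = ∂f₁(0) ∩ ∂f₂(0) = [−1,1].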

3.2. Properties of the solution set

Next, consider the following optimization problem:

inf_{x ∈ R^n} f(x)   (1)

where f: R^n → R̄ is proper, convex PWQ. Problem (1) is equivalent to inf_{x ∈ C} f_0(x) for a real-valued, proper, convex PWQ function f_0: R^n → R and C polyhedral, by considering f = f_0 + δ_C. Hence, polyhedral constraints can be incorporated in the extended-valued objective function. For each piece of the PWQ function f we associate the problem inf f_k, where f_k ≜ f + δ_{C_k}; this is of course equivalent to inf_{x ∈ C_k} f(x). The following proposition presents properties of the solution set of piecewise quadratic optimization problems.

Proposition 4. Consider the problem inf f, where f: R^n → R̄ is proper, convex PWQ. (a) If inf f is finite then argmin f is nonempty and polyhedral. (b) x ∈ argmin f if and only if x ∈ argmin f_k for every k ∈ K(x). (c) Suppose inf f = a > −∞. Then argmin f = ∪_{k ∈ L(a)} argmin f_k, where L(a) ≜ {k ∈ K | lev_a f_k ≠ ∅}. (d) argmin f is nonempty if and only if there exist x_k ∈ R^n, λ_k ∈ R^{m_k}_+ such that ∇f_k(x_k) + (A^k)′λ_k = 0 for every k ∈ K.

Proof. (a) See corollary 11.18 of Rockafellar and Wets (2009). (b) From the generalized Fermat rule (prop. 10.1, Rockafellar & Wets, 2009), x ∈ argmin f if and only if 0 ∈ ∂f(x). But ∂f(x) = ∩_{k ∈ K(x)} ∂f_k(x), as shown in proposition 3(b). Thus, 0 ∈ ∂f(x) if and only if 0 ∈ ∂f_k(x), k ∈ K(x), and the claim is proved.

(c) First, note that lev_a f = ∪_{k ∈ K} lev_a f_k for any a ∈ R. Equivalently, lev_a f = ∪_{k ∈ L(a)} lev_a f_k. But argmin f = lev_a f. It suffices then to show that lev_a f_k = argmin f_k. If this is not the case then there exists an a′ < a such that lev_{a′} f_k ≠ ∅. This means that lev_{a′} f ≠ ∅, arriving at a contradiction, since lev_{a′} f is empty for any a′ < a = min f by definition.

(d) Since f: R^n → R̄ is proper and lsc, inf f is finite if and only if f is level-bounded (prop. 1.9 in Rockafellar and Wets, 2009). This means that the set lev_a f is bounded for every a ∈ R. But since lev_a f = ∪_{k ∈ K} lev_a f_k, the condition is equivalent to the boundedness of the sets lev_a f_k, for all k ∈ K. In turn, boundedness of lev_a f_k is equivalent to the condition that for all y_k ∈ R^n such that A^k y_k ≤ 0 and Q_k y_k = 0, we have q_k′y_k ≥ 0. Invoking Farkas' Lemma, this is equivalent to the existence of x_k ∈ R^n, λ_k ∈ R^{m_k}_+ such that Q_k x_k + q_k + (A^k)′λ_k = 0. This proves the claim.

Parts (b) and (c) of proposition 4 reduce to proposition 5 in Mayne et al. (2006), which concerns only strictly convex piecewise quadratic optimization problems. In that special case argmin f is a singleton {x} and the index set L(a) reduces to K(x). However, in the general situation when the solution set is not a singleton, K(x) may not be constant for every x ∈ argmin f and it may differ from L(a). On the other hand, determination of K(x) is straightforward for any x ∈ argmin f, while determining the index set L(a) is not an easy task. In fact, one has to solve all the quadratic subproblems, a_k ≜ inf f_k, k ∈ K. Then a = min_{k ∈ K} a_k and L(a) = argmin_{k ∈ K} a_k. This is of course impractical if the number of pieces of the PWQ function is large. Calculation of K(x) entails the solution of inf f via a method for nonsmooth convex programs such as subgradient, cutting plane or bundle methods (Bertsekas et al., 2003). Furthermore, in Louveaux (1978) an efficient algorithm, reminiscent of the cutting plane method, is proposed for the solution of convex piecewise quadratic optimization problems that terminates in a finite number of iterations.

From proposition 4, it is easy to notice that if the PWQ function is strictly convex (the Q_k are positive definite), then argmin f is nonempty. The same observation holds if dom f is the union of a finite collection of polytopes (bounded polyhedral sets).
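To make the enumeration above concrete, the following minimal Python sketch (with hypothetical one-dimensional data chosen only so that f is continuous and convex; it is not part of the paper's algorithm) computes a = inf f, the index set L(a) and argmin f piece by piece, as in Proposition 4(c):

import numpy as np

# Pieces of a 1-D convex PWQ function: f_k(x) = 0.5*q[k]*x**2 + c[k]*x + s[k] on [lo[k], hi[k]].
q  = np.array([1.0, 0.0, 2.0])   # curvatures (>= 0)
c  = np.array([0.0, 1.0, -1.0])  # linear coefficients
s  = np.array([0.0, 0.0, 1.0])   # constant offsets
lo = np.array([-2.0, 0.0, 1.0])  # the intervals form a polyhedral decomposition of [-2, 3]
hi = np.array([0.0, 1.0, 3.0])

def piece_argmin(k):
    # Minimize the k-th quadratic piece over its interval.
    if q[k] > 0:
        x = np.clip(-c[k] / q[k], lo[k], hi[k])
    else:                      # affine piece: minimum at an endpoint
        x = lo[k] if c[k] >= 0 else hi[k]
    return x, 0.5 * q[k] * x**2 + c[k] * x + s[k]

vals = [piece_argmin(k) for k in range(len(q))]
a = min(v for _, v in vals)                                   # a = inf f
L = [k for k, (_, v) in enumerate(vals) if np.isclose(v, a)]  # index set L(a)
argmin_f = {vals[k][0] for k in L}                            # union of argmin f_k over k in L(a)
print(a, L, argmin_f)

Running this returns a = 0 with L(a) = {0, 1} (zero-based piece indices) and argmin f = {0}, illustrating that several pieces may attain the minimum while K(x) is easy to read off at any minimizer x.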

4. Parametric Piecewise Quadratic Optimization Problems

In this section we will examine properties of the parametric piecewise quadratic optimization problem:

V(p) ≜ inf_x f(x,p),   X(p) ≜ argmin_x f(x,p)   (2)

where f: R^n × R^d → R̄ is a proper, convex, PWQ function, i.e. f(x,p) = f_k(x,p) if (x,p) ∈ C_k, where:

f_k(x,p) ≜ ½ [x; p]′ [[Q_k, R_k], [R_k′, S_k]] [x; p] + [q_k; r_k]′ [x; p] + s_k,

C_k ≜ {(x,p) | A_k x + B_k p ≤ b_k} and C = {C_k | k ∈ K} is a polyhedral decomposition of dom f.

Proposition 5. Consider problem (2) for a proper, convex PWQ function f: R^n × R^d → R̄. Then: (a) V is proper, convex, PWQ. (b) X is a polyhedral multifunction, thus outer Lipschitz continuous relative to dom V. Furthermore, if f is strictly convex then X is single valued, piecewise affine and thus Lipschitz continuous.

(c) dom X = {p | ∃x, (x,p) ∈ dom f} ∩ {p | ∃x_k, λ_k ≥ 0 s.t. ∇_x f_k(x_k,p) + (A_k)′λ_k = 0, ∀k ∈ K}.

Proof. (a) See proposition 11.32(c) of Rockafellar and Wets (2009). (b) Notice that:

X(p) = {x | 0 ∈ ∂_x f(x,p)} = {x | ∃y, (0,y) ∈ ∂f(x,p)}.

The first equality follows from the generalized Fermat rule, i.e. proposition 10.1 of Rockafellar and Wets (2009), and the second from the partial subgradient rule for PWQ functions (proposition 10.22(c) of Rockafellar and Wets, 2009). Therefore, in order to draw conclusions about the structure of the set-valued mapping X: R^d ⇒ R^n, a careful investigation of the subdifferential needs to be performed. From proposition 3(b):

∂f(x,p) = ∩_{k ∈ K(x,p)} {(v,y) | (v,y) − ∇f_k(x,p) ∈ N_{C_k}(x,p)}.

From proposition 2, we know that every (x,p) belongs to G_J for some J ∈ J and that K(x,p) = K(J). By proposition 2(c) we actually have:


G_J = {(x,p) | A^k_{I_k(J)} x + B^k_{I_k(J)} p = b^k_{I_k(J)}, A^k_{I_k(J)^c} x + B^k_{I_k(J)^c} p < b^k_{I_k(J)^c}, k ∈ K(J)}.

Using proposition 1(f) we arrive at:

∂f(x,p) = ∩_{k ∈ K(J)} {(v,y) | (v,y) − ∇f_k(x,p) ∈ N_{C_k}(F_{I_k(J)})}  when (x,p) ∈ G_J.

Consider for each J ∈ J the polyhedral set:

S_J ≜ {(x,p,v,y) | (x,p) ∈ D_J, (v,y) − ∇f_k(x,p) ∈ N_{C_k}(F_{I_k(J)}), ∀k ∈ K(J)}.

Since gph ∂f is closed, it readily follows that gph ∂f = ∪_{J ∈ J} S_J, i.e. it is a union of a finite collection of polyhedral sets. Let W_J be the inverse image of S_J under (y,x,p) ↦ (x,p,0,y), and therefore polyhedral. Let M_J be the image of W_J under (y,x,p) ↦ (p,x), hence polyhedral as well. Then, since gph X = {(p,x) | ∃y, (x,p,0,y) ∈ gph ∂f}, we have gph X = ∪_{J ∈ J} M_J, i.e. it is a union of a finite collection of polyhedral sets. Outer Lipschitz continuity follows directly from the polyhedrality of X and theorem 3D.1 in Dontchev & Rockafellar (2009). In the case that f is strictly convex, the solution is unique for every p ∈ dom V, hence piecewise affine. Lipschitz continuity follows from Corollary 3D.5 in Dontchev & Rockafellar (2009). (c) Trivially follows from proposition 4(d).
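As a concrete parametric instance of (2) illustrating Proposition 5 (a standard example, not developed further in the paper): for n = d = 1 take f(x,p) = ½(x − p)² + |x|, a strictly convex PWQ function whose two pieces are separated by the hyperplane x = 0. The solution mapping is the soft-thresholding map

X(p) = p + 1 for p < −1,   X(p) = 0 for −1 ≤ p ≤ 1,   X(p) = p − 1 for p > 1,

single valued and piecewise affine, and the value function V(p) = ½p² for |p| ≤ 1 and |p| − ½ for |p| > 1 is convex PWQ (in fact the Moreau envelope of the absolute value), in agreement with parts (a) and (b) of Proposition 5.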

4.1. Definition of the critical region

Based on propositions 2 and 5, the following notion of critical region can be defined.

Definition 4. The critical region for an index set J ∈ J is the set R_J ≜ {p | ∃x ∈ X(p) s.t. J(x,p) = J}.

Next, consider the following sets:

R̄_J ≜ cl R_J,

X_J(p) ≜ {x | ∃y, (x,p) ∈ G_J, (0,y) − ∇f_k(x,p) ∈ N_{C_k}(F_{I_k(J)}), k ∈ K(J)},

X̄_J(p) ≜ {x | ∃y, (x,p) ∈ D_J, (0,y) − ∇f_k(x,p) ∈ N_{C_k}(F_{I_k(J)}), k ∈ K(J)},

Q_J ≜ dom X_J,   Q̄_J ≜ dom X̄_J.

Theorem 1. (a) R_J = Q_J and R̄_J = Q̄_J. (b) R̄_J = ∩_{k ∈ K(J)} R̄_{I_k(J)}, with:

R̄_{I_k(J)} = {p | ∃(x, λ_{I_k(J)}), ∇_x f_k(x,p) + (A^k_{I_k(J)})′λ_{I_k(J)} = 0, λ_{I_k(J)} ≥ 0, A^k_{I_k(J)} x + B^k_{I_k(J)} p = b^k_{I_k(J)}, A^k_{I_k(J)^c} x + B^k_{I_k(J)^c} p ≤ b^k_{I_k(J)^c}},

i.e. R̄_J is the intersection of closures of the critical regions of the parametric quadratic programs inf_x {f_k(x,p) | (x,p) ∈ C_k} corresponding to the equality sets I_k(J) ∈ B(C_k), k ∈ K(J). (c) For any p ∈ R̄_J, X(p) ⊇ X̄_J(p). (d) Suppose that C = {C_k | k ∈ K} is a subdivision of dom f. Consider a F ∈ F(R̄_J). Then F ⊆ R̄_{J′} for some J′ ∈ J.

Proof. (a) We have:


R_J = {p | ∃(x,y) s.t. (x,p) ∈ G_J, (0,y) ∈ ∂f(x,p)}
    = {p | ∃(x,y), (x,p) ∈ G_J, (0,y) ∈ ∩_{k ∈ K(J)} ∂f_k(x,p)}
    = {p | ∃(x,y), (x,p) ∈ G_J, (0,y) − ∇f_k(x,p) ∈ N_{C_k}(F_{I_k(J)}), k ∈ K(J)}.

The first equality follows from proposition 2(b) and the parametric version of Fermat's rule, the second from proposition 3(b) and the last from propositions 2(c) and 1(f). Obviously Q̄_J = R̄_J (corollary 6.3 in Rockafellar, 1970). (b) From (a) we know that:

R̄_J = {p | ∃(x,y), (x,p) ∈ D_J, (0,y) − ∇f_k(x,p) ∈ N_{C_k}(F_{I_k(J)}), k ∈ K(J)}.

But from proposition 2(c), D_J = ∩_{k ∈ K(J)} F_{I_k(J)}. Hence

R̄_J = ∩_{k ∈ K(J)} {p | ∃(x,y), (x,p) ∈ F_{I_k(J)}, (0,y) − ∇f_k(x,p) ∈ N_{C_k}(F_{I_k(J)})}.

The claim is proved by invoking Theorem 2(a) in Patrinos & Sarimveis (2010). (c) This follows from the closedness of ∂f, being a polyhedral multifunction. (d) Since R̄_J = {p | ∃x, (p,x) ∈ gph X̄_J}, it follows that every face of R̄_J is the projection of a face of gph X̄_J. In turn, gph X̄_J = D_J ∩ L_J, where:

L_J ≜ {(x,p) | ∃y, (0,y) − ∇f_k(x,p) ∈ N_{C_k}(F_{I_k(J)}), k ∈ K(J)}.

Hence every face of gph X̄_J is the intersection of a face of D_J with a face of L_J. If dom f is defined over a polyhedral subdivision then, through proposition 2(e), each face of D_J is expressible as D_{J′} for some J′ ∈ J. From theorem 1(c), X̄_J(p) ⊆ X(p) for any p ∈ dom X̄_J. This means that if p belongs to some face of R̄_J, then any x ∈ X̄_J(p) belongs to X(p). Consequently, we have (x,p) ∈ D_{J′} and there exists a y ∈ R^d such that (0,y) ∈ ∂f(x,p), i.e. x ∈ X̄_{J′}(p) (this follows by the definition of X̄_{J′}). This means that each p that belongs to a face of R̄_J actually belongs to R̄_{J′} for some J′ ∈ J.

Remark 2. The assumption appearing in theorem 1(d) is not as restrictive as it may seem at first glance. In fact, any polyhedral decomposition can always be transformed into a polyhedral subdivision. An algorithm that performs this transformation is given in the Appendix. This assumption will be very useful in the implementation of the adjacency oracle.

Remark 3. According to theorem 1(b), in principle, one could solve one convex parametric quadratic program (pQP) using algorithms such as Columbano, Fukuda & Jones (2009) and Patrinos & Sarimveis (2010) for each of the pieces of the PWQ function and store the corresponding solutions and value functions. Afterwards, one has to compare, for each value of the parameter vector, the corresponding value functions for all the pieces and find the indices of the pieces corresponding to the minimum value. Evidently, this procedure can become computationally prohibitive if the number of pieces of the PWQ function is large. Instead, in this paper, the proposed algorithm (see Section 6) avoids solving the pQPs corresponding to all the pieces, providing a more efficient and direct procedure for enumerating the closures of the full-dimensional critical regions of the parametric piecewise quadratic optimization problem.

Definition 5. Two critical regions R̄_{J_1}, R̄_{J_2} are called adjacent if the dimension of R̄_{J_1} ∩ R̄_{J_2} is d − 1.
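Continuing the soft-thresholding example given after Proposition 5 (for illustration only): the parameter space R is covered by the closures of three full-dimensional critical regions, (−∞,−1], [−1,1] and [1,∞), on each of which the active pieces and their equality sets stay fixed; the pairs ((−∞,−1], [−1,1]) and ([−1,1], [1,∞)) are adjacent in the sense of Definition 5, since the intersections {−1} and {1} have dimension d − 1 = 0.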

4.2. Calculation of the solution map

In this section we summarize methods for the computation of the closure of the full-dimensional critical regions R̄_J by making use of theorem 1(b), which relates critical regions of the convex parametric piecewise quadratic optimization problem to the critical regions of the parametric quadratic programs corresponding to the active pieces. First, if f is strictly convex then the solution is single valued and affine inside each R̄_J and can be calculated by solving explicitly the equality constrained QP:

min_x { f_k(x,p) | A^k_{I_k(J)} x + B^k_{I_k(J)} p = b^k_{I_k(J)} }

for any k ∈ K(J) (Mayne et al., 2006). If for k ∈ K(J) the linear independence constraint qualification (LICQ) holds then one can calculate the Lagrange multipliers from:

∇_x f_k(x,p) + (A^k_{I_k(J)})′ λ_{I_k(J)} = 0

and then obtain the polyhedral set R̄_{I_k(J)}. If LICQ is violated for some k ∈ K(J) then one can compute R̄_{I_k(J)} via projection. Then the closure of the full-dimensional critical region is R̄_J = ∩_{k ∈ K(J)} R̄_{I_k(J)}.

However, if f is merely convex, then one can calculate the critical regions of the subproblems via projection and find the minimum-norm, single-valued piecewise affine expression of the solution valid in R̄_J by solving the strictly convex parametric quadratic program min { ½ x′x | x ∈ X̄_J(p), p ∈ R̄_J }, as Spjøtvold et al. (2007) do for convex parametric quadratic programs.
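To make the first construction concrete, the following minimal Python sketch (hypothetical data, a single strictly convex piece, LICQ assumed; it is only an illustration of the equality-constrained QP above, not of the paper's full algorithm) solves the KKT system once for the constant part and once for the p-dependent part of the right-hand side, which yields the affine expressions x(p) and λ(p) valid on the critical region:

import numpy as np

# Piece data: minimize 0.5 x'Qx + x'Rp + q'x  s.t.  A_I x + B_I p = b_I  (scalar parameter p).
Q   = np.array([[2.0, 0.0], [0.0, 2.0]])
R   = np.array([[1.0], [0.0]])
q   = np.array([0.0, -1.0])
A_I = np.array([[1.0, 1.0]])
B_I = np.array([[0.5]])
b_I = np.array([1.0])

n, m = Q.shape[0], A_I.shape[0]
K = np.block([[Q, A_I.T], [A_I, np.zeros((m, m))]])   # KKT matrix (nonsingular under LICQ)

rhs0 = np.concatenate([-q, b_I])                 # right-hand side, constant part
rhs1 = np.concatenate([-R[:, 0], -B_I[:, 0]])    # right-hand side, coefficient of p
sol0 = np.linalg.solve(K, rhs0)
sol1 = np.linalg.solve(K, rhs1)

Fx, gx = sol1[:n], sol0[:n]     # x(p)   = Fx * p + gx
Fl, gl = sol1[n:], sol0[n:]     # lam(p) = Fl * p + gl
print("x(p)   =", Fx, "* p +", gx)
print("lam(p) =", Fl, "* p +", gl)
# The closure of the critical region is then obtained by imposing lam(p) >= 0 together with
# the inactive constraints A_{I^c} x(p) + B_{I^c} p <= b_{I^c}.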

5. Proto-differentiability and graphical derivative formula of the solution mapping

In this section, we will provide the graphical derivative formula for the solution mapping of a parametric piecewise quadratic optimization problem. The results will be based on the calculus rules for tangent cones to images and inverse images of sets expressible as finite unions of polyhedral sets, developed in Patrinos & Sarimveis (2010), which are summarized in Theorem 2. Its proof can be found in proposition 2 and theorem 4 of Patrinos & Sarimveis (2010).

Theorem 2. (a) Let C ≜ ∪_{k ∈ K} C_k, where each C_k ⊂ R^n is a polyhedral set and K is a finite index set. Then C is geometrically derivable at any point x ∈ C, with:

T_C(x) = ∪_{k ∈ K(x)} T_{C_k}(x),  where K(x) ≜ {k ∈ K | x ∈ C_k}.

(b) Let D = F(C), where C ⊆ R^n is a set expressible as the union of a finite collection of polyhedral sets, i.e. C = ∪_{k ∈ K} C_k, and F: R^n → R^m is an affine mapping, i.e. F(x) = Fx + f. At any y ∈ D one has:

T_D(y) = ∪_{x ∈ F^{-1}(y) ∩ C} F T_C(x).

(c) Let C = F^{-1}(D), where D ⊆ R^m is a set expressible as the union of a finite collection of polyhedral sets, i.e. D = ∪_{k ∈ K} D_k, and F: R^n → R^m is an affine mapping, i.e. F(x) ≜ Fx + f. At any x ∈ C one has:

T_C(x) = {w | Fw ∈ T_D(F(x))}.

In Theorem 3, it is proved that the partial subgradient mapping of a convex PWQ function is proto-differentiable, and a formula for its graphical derivative is provided. This result is an important special case of theorem 1.1 in Levy & Rockafellar (1996), where the graphical derivative formula is valid without the need to impose any constraint qualification.

Theorem 3. Let f: R^n × R^d → R̄ be convex, PWQ. For any (x,p) ∈ dom f and any v ∈ ∂_x f(x,p), the partial subgradient mapping is proto-differentiable at (x,p) for v, with:

D(∂_x f)(x,p | v)(dx,dp) = ∪_{y: (v,y) ∈ ∂f(x,p)} {dv | ∃dy, (dv,dy) ∈ D(∂f)(x,p | v,y)(dx,dp)}.

Proof. It was shown in the proof of proposition 5 that the graph of the subgradient mapping of a PWQ function is a union of a finite collection of polyhedral sets. From Rockafellar & Wets (2009), exercise 10.22(c), we have that ∂_x f(x,p) = {v | ∃y, (v,y) ∈ ∂f(x,p)}, therefore gph ∂_x f is the image of gph ∂f under the linear mapping (x,p,v,y) ↦ (x,p,v). It follows that the graph of ∂_x f is the union of a finite collection of polyhedral sets as well. From theorem 2(a) it follows that ∂_x f is proto-differentiable. Regarding the proto-derivative formula, it suffices to use theorem 2(b) regarding tangent cones to image sets of finite unions of polyhedral sets under an affine mapping.

Theorem 4. Let p ∈ dom V and x ∈ X(p). Let I_k(x,p) ≜ {i ∈ [1,m_k] | A^k_i x + B^k_i p − b^k_i = 0}, k ∈ K(x,p). Then the graphical derivative of X at p for x is given by the formula:

DX(p | x)(dp) = argmin_{dx} h(x,p; dx,dp)   (3)

where

h(x,p; dx,dp) ≜ ½ [dx; dp]′ ∇²f_k(x,p) [dx; dp],  if dx ∈ S_k(x,p; dp), k ∈ K(x,p),

with

S_k(x,p; dp) ≜ {dx | A^k_i dx + B^k_i dp = 0, i ∈ σ(λ^k_{I_k(x,p)}), A^k_i dx + B^k_i dp ≤ 0, i ∈ I_k(x,p) \ σ(λ^k_{I_k(x,p)})}   (4)

and λ^k_{I_k(x,p)}, k ∈ K(x,p), solve the linear program:

max_{y,λ} { dp′y | 0 = ∇_x f_k(x,p) + (A^k_{I_k(x,p)})′ λ^k_{I_k(x,p)},  y = ∇_p f_k(x,p) − (B^k_{I_k(x,p)})′ λ^k_{I_k(x,p)},  λ^k_{I_k(x,p)} ≥ 0,  k ∈ K(x,p) }   (5)

Proof. Recall that the solution mapping can be expressed as:

X(p) = {x | 0 ∈ ∂_x f(x,p)} = {x | ∃y, (0,y) ∈ ∂f(x,p)}.

Working with the graphs of the set-valued mappings we have that gph X = {(p,x) | F(p,x) ∈ gph ∂_x f}, where F(p,x) ≜ (x,p,0). Then T_{gph X}(p,x) = F^{-1} T_{gph ∂_x f}(x,p,0) from Theorem 2(c). Thus:

dx ∈ DX(p | x)(dp) ⟺ (dp,dx) ∈ T_{gph X}(p,x) ⟺ (dx,dp,0) ∈ T_{gph ∂_x f}(x,p,0) ⟺ 0 ∈ D(∂_x f)(x,p | 0)(dx,dp).

Invoking theorem 3, we arrive at:

dx ∈ DX(p | x)(dp) ⟺ 0 ∈ ∪_{y ∈ M(x,p)} {dv | ∃dy, (dv,dy) ∈ D(∂f)(x,p | 0,y)(dx,dp)}   (6)

where M(x,p) ≜ {y | (0,y) ∈ ∂f(x,p)}. Now, f: R^n × R^d → R̄, being proper, convex and PWQ, is prox-regular and subdifferentially continuous (ex. 13.30, Rockafellar & Wets, 2009) at (x,p) for (0,y) ∈ ∂f(x,p), and twice epi-differentiable (prop. 13.9, Rockafellar & Wets, 2009) at (x,p) for (0,y). Hence (th. 13.40, Rockafellar & Wets, 2009)

D(∂f)(x,p | 0,y) = ∂h  for  h = ½ d²f(x,p | 0,y).   (7)

From proposition 3(d) we have that:

h(x,p; dx,dp) = ½ ( f″((x,p); (dx,dp)) + δ_{C(x,p,y; dp)}(dx) ),

where C(x,p,y; dp) ≜ {dx | df(x,p)(dx,dp) = dp′y}. This comes down to:

h(x,p; dx,dp) = ½ [dx; dp]′ ∇²f_k(x,p) [dx; dp],  (dx,dp) ∈ T_{C_k}(x,p), dx ∈ C(x,p,y; dp).   (8)

Combining (6) and (7), dx ∈ DX(p | x)(dp) if and only if dx is a solution of the convex piecewise quadratic optimization problem:

inf_{dx} h(x,p; dx,dp).   (9)

In order to simplify the expression of the critical cone, from theorem 8.30 (Rockafellar & Wets, 2009), one has df(x,p)(dx,dp) = max{dp′y | (0,y) ∈ ∂f(x,p)}. But the set M(x,p) is the same for all x ∈ X(p), since f is convex (Theorem 10.13, Rockafellar & Wets, 2009). Furthermore, notice that

max { dp′y | (0,y) ∈ ∂f(x,p) }   (10)

is a linear program, since, invoking proposition 3(b), we obtain

∂f(x,p) = ∩_{k ∈ K(x,p)} { (v,y) | ∃λ^k_{I_k(x,p)} ≥ 0, v = ∇_x f_k(x,p) + (A^k_{I_k(x,p)})′λ^k_{I_k(x,p)}, y = ∇_p f_k(x,p) − (B^k_{I_k(x,p)})′λ^k_{I_k(x,p)} },   (11)

i.e. a polyhedral set. Therefore (10) is equivalent to the linear program appearing in (5). Denote the y-part of the solution set of (5) by Y^C(p; dp). Since M(x,p) is the same for all x ∈ X(p), C(x,p,y; dp) is a constant polyhedral set, independent of y ∈ Y^C(p; dp). Denote this set by C(x,p; dp). Summarizing,

C(x,p,y; dp) = C(x,p; dp) if y ∈ Y^C(p; dp), and C(x,p,y; dp) = ∅ otherwise.   (12)

Next, we will refine the expression of the polyhedral sets defining the PWQ function in (8). Consider a dx ∈ C(x,p; dp) and suppose that (dx,dp) ∈ T_{C_k}(x,p) for some k ∈ K(x,p). From proposition 3(c) we obtain that:

df(x,p)(dx,dp) = ∇_x f_k(x,p)′dx + ∇_p f_k(x,p)′dp.

Next, consider a y ∈ Y^C(p; dp). Since dx ∈ C(x,p; dp), we obtain ∇_x f_k(x,p)′dx + ∇_p f_k(x,p)′dp = y′dp. But, from y ∈ Y^C(p; dp) we have that:

∇_p f_k(x,p) = y + (B^k_{I_k(x,p)})′λ^k_{I_k(x,p)},
∇_x f_k(x,p) = −(A^k_{I_k(x,p)})′λ^k_{I_k(x,p)},   λ^k_{I_k(x,p)} ≥ 0, k ∈ K(x,p).

Therefore, the above equation becomes:

(λ^k_{I_k(x,p)})′ ( A^k_{I_k(x,p)} dx + B^k_{I_k(x,p)} dp ) = 0,   (dx,dp) ∈ T_{C_k}(x,p).   (13)

It readily follows that:

T_{C_k}(x,p) ∩ { (dx,dp) | dx ∈ C(x,p,y; dp) } = { (dx,dp) | A^k_i dx + B^k_i dp = 0, i ∈ σ(λ^k_{I_k(x,p)}), A^k_i dx + B^k_i dp ≤ 0, i ∈ I_k(x,p) \ σ(λ^k_{I_k(x,p)}) }   (14)

and the theorem is proved.
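As a small numerical companion to the linear program (5) (hypothetical data for a single active piece; this is only a sketch of how the directional multipliers could be computed, not the paper's implementation), one can assemble the constraints over the variables (y, λ) and call an off-the-shelf LP solver:

import numpy as np
from scipy.optimize import linprog

grad_x = np.array([-1.0, 1.0])              # grad_x f_k(x, p) at the current (x, p)
grad_p = np.array([0.5])                    # grad_p f_k(x, p)
A_I = np.array([[1.0, 0.0], [0.0, -1.0]])   # active rows of A^k at (x, p)
B_I = np.array([[1.0], [0.0]])              # corresponding rows of B^k
dp  = np.array([1.0])                       # parameter direction

d, nI = grad_p.size, A_I.shape[0]
# Equality constraints on z = (y, lam):  A_I' lam = -grad_x   and   y + B_I' lam = grad_p.
A_eq = np.vstack([np.hstack([np.zeros((A_I.shape[1], d)), A_I.T]),
                  np.hstack([np.eye(d), B_I.T])])
b_eq = np.concatenate([-grad_x, grad_p])
c = np.concatenate([-dp, np.zeros(nI)])          # maximize dp'y  <=>  minimize -dp'y
bounds = [(None, None)] * d + [(0, None)] * nI   # y free, lam >= 0
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
if res.success:
    y, lam = res.x[:d], res.x[d:]
    print("directional multiplier y =", y, ", lam =", lam)
else:
    print("infeasible: the directional multiplier set Y^C(p; dp) is empty")

With several active pieces one would stack one such block of constraints per k ∈ K(x,p), sharing the variable y.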

Concerning the LP in (5), one can easily see that Y^C(p; dp) is nonempty if and only if Slater's condition holds (Bertsekas et al., 2003) for at least one of the QP subproblems inf_x f_k(x,p), k ∈ K(x,p). We call Y^C(p; dp) the directional multiplier set. In that case, S_k(x,p; dp) is nonempty for some k ∈ K(x,p). If f is strictly convex, problem (3) always has a solution and the graphical derivative is a nonempty polyhedral set. However, when for p ∈ dom X the set X(p) is multivalued, the graphical derivative DX(p | x)(dp) for a specific solution x may be empty, i.e. argmin_{dx} h(x,p; dx,dp) is empty. This situation reflects the fact that the solution set map is outer semicontinuous, as a polyhedral multifunction, but may fail to be inner semicontinuous. Lack of inner semicontinuity implies that the set of reachable points:

RP(p; dp) ≜ {x | ∃τ^ν ↓ 0, x^ν → x, x^ν ∈ X(p + τ^ν dp)}

is in general a proper subset of X(p). The elements of RP(p; dp) are exactly the ones for which DX(p | x)(dp) is nonempty. The next theorem gives a formula for testing whether the graphical derivative is nonempty for a solution x ∈ X(p).

Theorem 5. Consider a x ∈ X(p). Let X_k(p) ≜ argmin_x {f_k(x,p) | (x,p) ∈ C_k} for k ∈ K(x,p). Then DX(p | x)(dp) is nonempty if and only if x solves the linear program:

min_x { Σ_{k ∈ K(x,p)} (R_k dp)′x | x ∈ X_k(p), k ∈ K(x,p) }   (15)

Proof. Consider any x ∈ X(p). Denote the λ-part of the solution set of (5) by L^C(p; dp). Consider any {λ^k}_{k ∈ K(x,p)} ∈ L^C(p; dp). Consider the following index sets:

I^k_1 ≜ {i ∈ [1,m_k] | A^k_i x + B^k_i p − b^k_i = 0, λ^k_i > 0}
I^k_2 ≜ {i ∈ [1,m_k] | A^k_i x + B^k_i p − b^k_i = 0, λ^k_i = 0}
I^k_3 ≜ {i ∈ [1,m_k] | A^k_i x + B^k_i p − b^k_i < 0, λ^k_i = 0}   (16)

for every k ∈ K(x,p). From proposition 4(b) we have that x ∈ X_k(p), k ∈ K(x,p). From proposition 1(iv) in Patrinos and Sarimveis (2010):

X_k(p) = { z ∈ R^n | Q_k z = Q_k x, A^k_i z + B^k_i p = b^k_i, i ∈ σ(λ^k), A^k_i z + B^k_i p ≤ b^k_i, i ∉ σ(λ^k) }.

Then, (15) is equivalent to:

min_z { Σ_{k ∈ K(x,p)} (R_k dp)′z | Q_k z = Q_k x, A^k_i z + B^k_i p = b^k_i, i ∈ I^k_1, A^k_i z + B^k_i p ≤ b^k_i, i ∈ I^k_2 ∪ I^k_3, k ∈ K(x,p) }   (17)

Since DX(p | x)(dp) is the solution set of the convex piecewise quadratic optimization problem (3), it is nonempty if and only if there exist dx_k ∈ R^n, dλ^k ∈ R^{m_k} such that Q_k dx_k + R_k dp + (A^k)′dλ^k = 0, dλ^k_{I^k_2} ≥ 0, dλ^k_{I^k_3} = 0 for every k ∈ K(x,p) (this extends proposition 4(d) to piecewise quadratic optimization problems with equality constraints). This condition can be written equivalently as:

Q_k dx_k + R_k dp + (A^k_{I^k_2 ∪ I^k_3})′ dλ^k_{I^k_2 ∪ I^k_3} = 0,   (dλ^k_{I^k_2 ∪ I^k_3})′ ( A^k_{I^k_2 ∪ I^k_3} x + B^k_{I^k_2 ∪ I^k_3} p − b^k_{I^k_2 ∪ I^k_3} ) = 0   (18)

for every k ∈ K(x,p). The dual of (17) is:

min_{dx, {dλ^k}} { Σ_{k ∈ K(x,p)} [ (B^k_{I^k_2 ∪ I^k_3} p − b^k_{I^k_2 ∪ I^k_3})′ dλ^k_{I^k_2 ∪ I^k_3} + (Q_k x)′ dx ] | Q_k dx + R_k dp + (A^k_{I^k_2 ∪ I^k_3})′ dλ^k_{I^k_2 ∪ I^k_3} = 0, dλ^k_{I^k_2 ∪ I^k_3} ≥ 0, k ∈ K(x,p) }.

It follows that x is a minimizer of (17) if and only if there exist dx ∈ R^n and dλ^k ∈ R^{m_k}, k ∈ K(x,p), that are feasible for the dual and complementary slackness holds, i.e. the condition appearing in (18) is satisfied.

For the rest of the section we consider the restricted map $\mathcal{X}_{\mathcal{J}}$ for any $\mathcal{J} \in \mathbb{J}$ such that $\mathcal{R}_{\mathcal{J}}$ is nonempty. Specifically, we will show that for any $x \in \mathcal{X}_{\mathcal{J}}(p)$ the graphical derivative is either empty or a constant polyhedral set independent of $x$. Since the solution set of (15) depends only on the set of active pieces $\mathcal{K}(x,p)$, it is a constant polyhedral set for any $x \in \mathcal{X}(p)$ such that $\mathcal{J}(x,p) = \mathcal{J}$, i.e. $x \in \mathcal{X}_{\mathcal{J}}(p)$. Hence the subset of $\mathcal{X}_{\mathcal{J}}(p)$ for which the graphical derivative is nonempty is equal to the set of optimal solutions of (15), which we denote by $\mathcal{X}^C_{\mathcal{J}}(p;dp)$. We will call $\mathcal{X}^C_{\mathcal{J}}(p;dp)$ the directional solution set at $p$ for $\mathcal{J}$ in the direction $dp$. Now, notice that (15) can be rewritten as $\min_x \{\textstyle\sum_{k \in \mathcal{K}(\mathcal{J})} (\mathbf{R}^k dp)'\, x \mid y \in \partial_p f_0^k(x,p),\ k \in \mathcal{K}(\mathcal{J})\}$. Thus, for any $p \in \mathcal{R}_{\mathcal{J}}$, instead of first computing an element of $\mathcal{Y}^C(p;dp)$ by solving (10) and then an element of $\mathcal{X}^C_{\mathcal{J}}(p;dp)$ by solving (17), one can solve the following linear program:

$$\min_{x,\,y} \Big\{\, \textstyle\sum_{k \in \mathcal{K}(\mathcal{J})} (\mathbf{R}^k\, dp)'\, x - dp'\, y \;\Big|\; y \in \partial_p f_0^k(x,p),\ k \in \mathcal{K}(\mathcal{J}) \,\Big\} \qquad (19)$$

This follows from the simple minimization rule in Rockafellar & Wets (2009), exercise 1.36. Using proposition 1(iv) of Patrinos & Sarimveis (2010), the linear program (19) can be rewritten as:


$$\min_{x,\,y,\,\{\lambda^k\}} \left\{\, \textstyle\sum_{k \in \mathcal{K}(\mathcal{J})} (\mathbf{R}^k\, dp)'\, x - dp'\, y \;\middle|\; \begin{array}{l} 0 = \nabla_x f_0^k(x,p) + (\mathbf{A}^k_{\mathcal{I}_k(\mathcal{J})})'\, \lambda^k_{\mathcal{I}_k(\mathcal{J})}\\ y = \nabla_p f_0^k(x,p) - (\mathbf{B}^k_{\mathcal{I}_k(\mathcal{J})})'\, \lambda^k_{\mathcal{I}_k(\mathcal{J})}\\ \mathbf{A}^k_{\mathcal{I}_k(\mathcal{J})} x = \mathbf{B}^k_{\mathcal{I}_k(\mathcal{J})} p + b^k_{\mathcal{I}_k(\mathcal{J})}\\ \mathbf{A}^k_{\mathcal{I}_k^c(\mathcal{J})} x \le \mathbf{B}^k_{\mathcal{I}_k^c(\mathcal{J})} p + b^k_{\mathcal{I}_k^c(\mathcal{J})}\\ \lambda^k_{\mathcal{I}_k(\mathcal{J})} \ge 0,\quad k \in \mathcal{K}(\mathcal{J}) \end{array} \,\right\} \qquad (20)$$

Let $(x, \{\lambda^k\}_{k \in \mathcal{K}(\mathcal{J})})$ belong to the relative interior of $\mathcal{X}^C_{\mathcal{J}}(p;dp) \times \mathcal{L}^C(p;dp)$ (i.e. the $(x,\lambda)$-part of the solution set of (20)) and partition the set of constraints of the active pieces as follows:

$$\begin{aligned}
\mathcal{I}'_{\mathcal{J},k,1}(p) &= \{\, i \in [1, m_k] \mid \mathbf{A}_i^k x - \mathbf{B}_i^k p - b_i^k = 0 < \lambda_i^k \,\}\\
\mathcal{I}'_{\mathcal{J},k,2}(p) &= \{\, i \in [1, m_k] \mid \mathbf{A}_i^k x - \mathbf{B}_i^k p - b_i^k = 0 = \lambda_i^k \,\}\\
\mathcal{I}'_{\mathcal{J},k,3}(p) &= \{\, i \in [1, m_k] \mid \mathbf{A}_i^k x - \mathbf{B}_i^k p - b_i^k < 0 = \lambda_i^k \,\}
\end{aligned} \qquad (21)$$

for every $k \in \mathcal{K}(\mathcal{J})$. Then the polyhedral sets appearing in (4) can be expressed as:

$$\mathcal{D}^k_{\mathcal{J}}(p;dp) = \left\{\, dx \in \mathbb{R}^n \;\middle|\; \begin{array}{ll} \mathbf{A}_i^k\, dx = \mathbf{B}_i^k\, dp, & i \in \mathcal{I}'_{\mathcal{J},k,1}(p)\\ \mathbf{A}_i^k\, dx \le \mathbf{B}_i^k\, dp, & i \in \mathcal{I}'_{\mathcal{J},k,2}(p) \end{array} \,\right\} \qquad (22)$$

for every $k \in \mathcal{K}(\mathcal{J})$. We call $\mathcal{D}^k_{\mathcal{J}}(p;dp)$ the directional critical sets of $\mathcal{X}$ for $\mathcal{J}$ at $p$ in direction $dp$. Notice that we have suppressed the dependence on $x$, since, following the reasoning in Patrinos & Sarimveis (2010), one can show that these are constant polyhedral sets, independent of $(x, \{\lambda^k\}_{k \in \mathcal{K}(\mathcal{J})}) \in \operatorname{ri}(\mathcal{X}^C_{\mathcal{J}}(p;dp) \times \mathcal{L}^C(p;dp))$. We have just proved the following corollary.

Corollary 1. Consider a $p \in \mathcal{R}_{\mathcal{J}}$. Then the graphical derivative of $\mathcal{X}$ at $p$ for any $x \in \mathcal{X}_{\mathcal{J}}(p)$ is nonempty and equal to a constant polyhedral set (independent of $x$) if and only if $\mathcal{Y}^C(p;dp)$ is nonempty and $x \in \mathcal{X}^C_{\mathcal{J}}(p;dp)$. In that case, the graphical derivative of $\mathcal{X}$ at $p$ is equal to the solution set of the following piecewise quadratic optimization problem:

$$\min_{dx}\ h(p;dx,dp)$$
where
$$h(p;dx,dp) \triangleq \tfrac12 \begin{bmatrix} dx\\ dp \end{bmatrix}' \begin{bmatrix} \mathbf{Q}^k & (\mathbf{R}^k)'\\ \mathbf{R}^k & \mathbf{S}^k \end{bmatrix} \begin{bmatrix} dx\\ dp \end{bmatrix} \quad \text{if } dx \in \mathcal{D}^k_{\mathcal{J}}(p;dp),\ k \in \mathcal{K}(\mathcal{J}) \qquad (23)$$
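For concreteness, the sketch below evaluates one quadratic piece of $h(p;dx,dp)$ in (23) for a given pair $(dx,dp)$; the matrices $\mathbf{Q}^k$, $\mathbf{R}^k$, $\mathbf{S}^k$ are illustrative placeholders.

```python
import numpy as np

# Placeholder piece data (assumed for illustration only).
Qk = np.array([[2.0, 0.0], [0.0, 1.0]])   # block acting on dx
Rk = np.array([[0.5, 0.0], [0.0, 0.5]])   # coupling block
Sk = np.eye(2)                            # block acting on dp

def h_piece(dx, dp):
    """Value of the k-th quadratic piece of h(p; dx, dp) in (23)."""
    v = np.concatenate([dx, dp])
    M = np.block([[Qk, Rk.T], [Rk, Sk]])
    return 0.5 * v @ M @ v

print(h_piece(np.array([1.0, 0.0]), np.array([0.0, 1.0])))
```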

6. Determining all adjacent critical regions

In this section we will show how one can compute all adjacent critical regions along a facet of a full dimensional critical region for the parametric piecewise quadratic optimization problem (2). The following theorem shows how to identify the set of active pieces and their equality sets when moving a small step along a normalized direction from a parameter point for which the set $\mathcal{J}$ is already known. Here, $\mathcal{R}_{\mathcal{J}}$ is not necessarily full-dimensional; e.g., the set $\mathcal{J}$ could characterize a proper face of the closure of a full dimensional critical region.

Theorem 6. Consider a $p \in \mathcal{R}_{\mathcal{J}}$ and let $dp \in \mathbb{R}^d$ be a normalized direction, i.e. $\|dp\| = 1$. Let $\{\mathcal{I}'_{\mathcal{J},k,1}(p), \mathcal{I}'_{\mathcal{J},k,2}(p), \mathcal{I}'_{\mathcal{J},k,3}(p)\}_{k \in \mathcal{K}(\mathcal{J})}$ be the partition of the constraints as in (21) corresponding to a solution belonging to the relative interior of the solution set of the LP in (20). Let:

$$\begin{aligned}
V''^{\,k}(p;dp) &\triangleq \inf_{dx}\{\, h^k(p;dx,dp) \mid dx \in \mathcal{D}^k_{\mathcal{J}}(p;dp) \,\}\\
\mathcal{X}''^{\,k}(p;dp) &\triangleq \operatorname{argmin}_{dx}\{\, h^k(p;dx,dp) \mid dx \in \mathcal{D}^k_{\mathcal{J}}(p;dp) \,\}
\end{aligned} \qquad (24)$$

for every $k \in \mathcal{K}(\mathcal{J})$, where the directional critical sets $\mathcal{D}^k_{\mathcal{J}}(p;dp)$ are given by (22). Let
$$a \triangleq \min_{k \in \mathcal{K}(\mathcal{J})} V''^{\,k}(p;dp), \qquad \mathcal{L}(p;dp) \triangleq \{\, k \in \mathcal{K}(\mathcal{J}) \mid V''^{\,k}(p;dp) = a \,\}$$


and
$$\mathcal{I}''_{\mathcal{J},k}(p) \triangleq \{\, i \in \mathcal{I}'_{\mathcal{J},k,2}(p) \mid \mathbf{A}_i^k\, dx = \mathbf{B}_i^k\, dp\ \ \forall\, dx \in \mathcal{X}''^{\,k}(p;dp) \,\}$$
for every $k \in \mathcal{L}(p;dp)$. For sufficiently small $\varepsilon > 0$ and for every $\tau \in (0,\varepsilon)$ let $\mathcal{J}(\tau) \triangleq \mathcal{J}(x + \tau\, dx,\ p + \tau\, dp)$. Then:
$$\mathcal{K}(\mathcal{J}(\tau)) = \mathcal{L}(p;dp), \qquad \mathcal{I}_k(\mathcal{J}(\tau)) = \mathcal{I}'_{\mathcal{J},k,1}(p) \cup \mathcal{I}''_{\mathcal{J},k}(p), \quad k \in \mathcal{K}(\mathcal{J}(\tau)).$$

Proof. Combining proposition 4(c), theorem 4 and corollary 1 we obtain that $D\mathcal{X}(p\,|\,x)(dp) = \bigcup_{k \in \mathcal{L}(p;dp)} \mathcal{X}''^{\,k}(p;dp)$ for any $x \in \mathcal{X}^C_{\mathcal{J}}(p;dp)$. Since $\mathcal{X}$ is a polyhedral multifunction, we have the following exact approximation property (Theorem 6.4, Klatte & Kummer, 2002): for any $x \in \mathcal{X}^C_{\mathcal{J}}(p;dp)$ and sufficiently small $\varepsilon > 0$ it holds that
$$D\mathcal{X}(p\,|\,x)(dp) = \{\, dx \mid x + \tau\, dx \in \mathcal{X}(p + \tau\, dp),\ \forall\, \tau \in (0,\varepsilon) \,\}.$$

First notice that $\mathcal{K}(\mathcal{J}(\tau)) \subseteq \mathcal{K}(\mathcal{J})$. Now, if $k \notin \mathcal{L}(p;dp)$ then $(dx,dp) \notin T_{C^k}(x,p)$ for any $dx \in D\mathcal{X}(p\,|\,x)(dp)$. Since
$$\mathbf{A}^k (x + \tau\, dx) - \mathbf{B}^k (p + \tau\, dp) - b^k = \mathbf{A}^k x - \mathbf{B}^k p - b^k + \tau\, (\mathbf{A}^k dx - \mathbf{B}^k dp)$$
and $\mathbf{A}_i^k x - \mathbf{B}_i^k p - b_i^k = 0$ for $i \in \mathcal{I}'_{\mathcal{J},k,1}(p) \cup \mathcal{I}'_{\mathcal{J},k,2}(p)$, it follows that $\mathbf{A}_i^k (x + \tau\, dx) - \mathbf{B}_i^k (p + \tau\, dp) - b_i^k > 0$ for some $i \in [1, m_k]$. Therefore, $(x + \tau\, dx,\, p + \tau\, dp) \notin C^k$ for any $x + \tau\, dx \in \mathcal{X}(p + \tau\, dp)$. Next, consider a $k \in \mathcal{L}(p;dp)$ and

distinguish between the following cases:

(a) $i \in \mathcal{I}'_{\mathcal{J},k,1}(p) \cup \mathcal{I}''_{\mathcal{J},k}(p)$: Then $\mathbf{A}_i^k x - \mathbf{B}_i^k p - b_i^k = 0$ for every $x \in \mathcal{X}^C_{\mathcal{J}}(p;dp)$ and $\mathbf{A}_i^k dx = \mathbf{B}_i^k dp$ for every $dx \in D\mathcal{X}(p\,|\,x)(dp)$. Thus, $\mathbf{A}_i^k (x + \tau\, dx) - \mathbf{B}_i^k (p + \tau\, dp) - b_i^k = 0$ for every $x + \tau\, dx \in \mathcal{X}(p + \tau\, dp)$.
(b) $i \in \mathcal{I}'_{\mathcal{J},k,2}(p) \setminus \mathcal{I}''_{\mathcal{J},k}(p)$: Then $\mathbf{A}_i^k x - \mathbf{B}_i^k p - b_i^k = 0$ for every $x \in \mathcal{X}^C_{\mathcal{J}}(p;dp)$ and there exists a $dx \in D\mathcal{X}(p\,|\,x)(dp)$ such that $\mathbf{A}_i^k dx < \mathbf{B}_i^k dp$. Thus $\mathbf{A}_i^k (x + \tau\, dx) - \mathbf{B}_i^k (p + \tau\, dp) - b_i^k < 0$ for some $x + \tau\, dx \in \mathcal{X}(p + \tau\, dp)$.
(c) $i \in \mathcal{I}'_{\mathcal{J},k,3}(p)$: $\mathbf{A}_i^k x - \mathbf{B}_i^k p - b_i^k < 0$ for some $x \in \mathcal{X}^C_{\mathcal{J}}(p;dp)$. Thus $\mathbf{A}_i^k (x + \tau\, dx) - \mathbf{B}_i^k (p + \tau\, dp) - b_i^k < 0$ for small enough $\tau \in (0,\varepsilon)$.

We have just proven that $\mathcal{K}(\mathcal{J}(\tau)) = \mathcal{L}(p;dp)$ and $\mathcal{I}_k(\mathcal{J}(\tau)) = \mathcal{I}'_{\mathcal{J},k,1}(p) \cup \mathcal{I}''_{\mathcal{J},k}(p)$, $k \in \mathcal{K}(\mathcal{J}(\tau))$, as promised.

Remark 4. For strictly convex parametric piecewise quadratic optimization problems, $\mathcal{X}_{\mathcal{J}}$ is single-valued and equal to $\mathcal{X}^C_{\mathcal{J}}(p;dp)$. Therefore, instead of solving LP (20), we merely need to solve the LP appearing in (5), which is of smaller dimension. For the rest of this section, we impose the following assumption.

Assumption 1. The proper, convex, PWQ function $f : \mathbb{R}^n \times \mathbb{R}^d \to \overline{\mathbb{R}}$ is defined over a polyhedral subdivision.

We have seen in remark 2 that this assumption is not restrictive. At the same time, it simplifies the implementation of the adjacency oracle since it allows the application of theorem 4. Specifically, under this assumption, theorem 1(d) assures that every facet of the closure of a full dimensional critical region belongs to the closure of a (possibly lower dimensional) critical region $\mathcal{R}_{\mathcal{J}'}$ for some $\mathcal{J}' \in \mathbb{J}$; hence the set of active pieces and the equality sets of the active pieces are constant along each facet. Notice that the index set $\mathcal{J}'$ for each facet of $\mathcal{R}_{\mathcal{J}}$ can be obtained as a byproduct during the computation of $\mathcal{R}_{\mathcal{J}}$ at almost no extra computational cost. Based on this observation and theorem 6, the following algorithm determines all critical regions that are adjacent to the closure of a critical region along a facet.

Algorithm 1: Adjacency Oracle
Input: Normalized facet $F = \mathcal{R}_{\mathcal{J}} \cap \operatorname{aff}\{p \mid dp'\, p = g\}$ and $\mathcal{J}'$ such that $F \subseteq \mathcal{R}_{\mathcal{J}'}$.
Output: All sets $\{\mathcal{J}^{\mathrm{adj}}_j\}_{j=1}$ defining full dimensional critical regions adjacent to $F$.
1: Solve the pLP
$$\min_{x,\,y,\,\lambda} \Big\{\, \textstyle\sum_{k \in \mathcal{K}(\mathcal{J}')} (\mathbf{R}^k\, dp)'\, x - dp'\, y \;\Big|\; y \in \partial_p f_0^k(x,p),\ k \in \mathcal{K}(\mathcal{J}') \,\Big\}, \qquad p \in F,$$
and obtain the triplets $\{\{\mathcal{I}'_{\mathcal{J}',k,1,j}, \mathcal{I}'_{\mathcal{J}',k,2,j}, \mathcal{I}'_{\mathcal{J}',k,3,j}\}_{k \in \mathcal{K}(\mathcal{J}')}\}_{j=1,\dots}$ characterizing the $(d-1)$-dimensional critical sets of the pLP.
2: If the pLP is unbounded then we have reached a boundary of $\operatorname{dom} V$. Stop. Else
3: for $j = 1, \dots$


4: for $k \in \mathcal{K}(\mathcal{J}')$ compute a $dx$ belonging to the relative interior of the solution set of
$$V''^{\,k,j} = \inf_{dx}\{\, h^k(dx,dp) \mid dx \in \mathcal{D}^{k,j}_{\mathcal{J}'} \,\}$$
where
$$\mathcal{D}^{k,j}_{\mathcal{J}'} = \left\{\, dx \;\middle|\; \begin{array}{ll} \mathbf{A}_i^k\, dx = \mathbf{B}_i^k\, dp, & i \in \mathcal{I}'_{\mathcal{J}',k,1,j}\\ \mathbf{A}_i^k\, dx \le \mathbf{B}_i^k\, dp, & i \in \mathcal{I}'_{\mathcal{J}',k,2,j} \end{array} \,\right\}$$
and determine $\mathcal{I}''_{\mathcal{J}',k,j} = \{\, i \in \mathcal{I}'_{\mathcal{J}',k,2,j} \mid \mathbf{A}_i^k\, dx = \mathbf{B}_i^k\, dp \,\}$.
5: end for
6: Let $a_j \triangleq \min_{k \in \mathcal{K}(\mathcal{J}')} V''^{\,k,j}$ and $\mathcal{L}_j \triangleq \{\, k \in \mathcal{K}(\mathcal{J}') \mid V''^{\,k,j} = a_j \,\}$.
7: According to theorem 6, $\mathcal{K}(\mathcal{J}^{\mathrm{adj}}_j) \triangleq \mathcal{L}_j$ and $\mathcal{I}_k(\mathcal{J}^{\mathrm{adj}}_j) \triangleq \mathcal{I}'_{\mathcal{J}',k,1,j} \cup \mathcal{I}''_{\mathcal{J}',k,j}$, $k \in \mathcal{K}(\mathcal{J}^{\mathrm{adj}}_j)$, define a full dimensional critical region whose closure is adjacent to $\mathcal{R}_{\mathcal{J}}$ along facet $F$.
8: end for
9: end if

Some comments regarding the adjacency oracle are in order. In step 1, the parametric linear program (pLP) appearing in (20) is solved along the facet using the graph traversal algorithm for pLPs in Patrinos & Sarimveis (2010). This is done by first projecting the problem on the affine hull of $F$. Thus, the full dimensional critical regions of (20) in this projected space are actually $(d-1)$-dimensional polyhedral sets that partition $F$. Each critical region of the pLP is determined by a set of triplets $\{\mathcal{I}'_{\mathcal{J}',k,1,j}, \mathcal{I}'_{\mathcal{J}',k,2,j}, \mathcal{I}'_{\mathcal{J}',k,3,j}\}_{k \in \mathcal{K}(\mathcal{J}')}$ which characterize relative interior solutions of the pLP. This is exactly the information obtained by the algorithm of Patrinos & Sarimveis (2010). For each critical region of the pLP, the quadratic programs appearing in (24) are solved in order to determine the graphical derivative and, invoking theorem 6, the set $\mathcal{J}^{\mathrm{adj}}_j$ that characterizes the adjacent critical regions; a minimal sketch of this step is given below. Notice that these quadratic programs are parameter-free. Furthermore, the cardinality of $\mathcal{K}(\mathcal{J}')$ is very small compared to the total number of pieces of the piecewise quadratic function $f$, hence only a very small number of QPs needs to be solved. Therefore, for the implementation of the adjacency oracle a parametric linear solver and a standard QP solver are required; there is no need for a convex nonsmooth optimization algorithm. In addition, if the solution set of the pLP is single-valued, then there is no need to solve a quadratic program at all. Finally, due to remark 4, the computations are further simplified for strictly convex parametric piecewise quadratic optimization problems.
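The following is a minimal sketch of steps 4 to 7 of the adjacency oracle for one critical region of the pLP: solve a parameter-free QP over the directional critical set of each active piece, keep the pieces attaining the smallest value, and read off which weakly active constraints become equalities. It uses cvxpy as a generic QP solver and assumed placeholder data; it is not the authors' implementation.

```python
import numpy as np
import cvxpy as cp

dp = np.array([1.0, 0.0])                 # direction along the facet normal (assumed)

# Placeholder data per active piece k: Hessian blocks and constraint rows,
# split into equality rows (I'_1) and weakly active rows (I'_2).
pieces = {
    1: dict(Q=np.eye(2), R=np.zeros((2, 2)), S=np.eye(2),
            A_eq=np.array([[1.0, 0.0]]), B_eq=np.array([[1.0, 0.0]]),
            A_in=np.array([[0.0, 1.0]]), B_in=np.array([[0.0, 1.0]])),
}

values, minimizers = {}, {}
for k, d in pieces.items():
    dx = cp.Variable(2)
    # h^k(dx, dp) = 0.5 dx'Q dx + (R'dp)'dx + 0.5 dp'S dp
    obj = (0.5 * cp.quad_form(dx, d["Q"])
           + (d["R"].T @ dp) @ dx
           + 0.5 * dp @ d["S"] @ dp)
    cons = [d["A_eq"] @ dx == d["B_eq"] @ dp,   # rows in I'_{J',k,1,j}
            d["A_in"] @ dx <= d["B_in"] @ dp]   # rows in I'_{J',k,2,j}
    prob = cp.Problem(cp.Minimize(obj), cons)
    prob.solve()
    values[k], minimizers[k] = prob.value, dx.value

a = min(values.values())                        # step 6: best directional value
L = [k for k, v in values.items() if np.isclose(v, a)]
for k in L:                                     # step 7: new equality sets
    d, dxk = pieces[k], minimizers[k]
    becomes_equality = np.isclose(d["A_in"] @ dxk, d["B_in"] @ dp)
    print("piece", k, "attains the minimum; weakly active rows now equalities:",
          becomes_equality)
```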

The goal of parametric optimization is to enumerate the closures of all full-dimensional critical regions. In accordance with the graph traversal technique, the problem under investigation can be identified with the following graph.

Definition 6. Let $V$ be the set of full dimensional critical regions and let $E$ consist of edges connecting each pair of adjacent critical regions. The graph $G = (V, E)$ is called the critical region graph.

Based on Algorithm 1, algorithm 2 enumerates all full-dimensional critical regions of a parametric piecewise quadratic optimization problem using the graph traversal paradigm.

Algorithm 2: Graph Traversal Algorithm for convex parametric piecewise quadratic optimization problems
1: compute an initial index set $\mathcal{J}_0$
2: initialize the list of unexplored index sets: $\mathcal{U} \leftarrow \{\mathcal{J}_0\}$
3: initialize the list of discovered index sets: $\mathcal{L} \leftarrow \{\mathcal{J}_0\}$
4: while $\mathcal{U} \ne \emptyset$ do
5: remove any $\mathcal{J}$ from $\mathcal{U}$
6: compute a minimal, normalized representation of the region $\mathcal{R}_{\mathcal{J}}$
7: for all facets $F$ of $\mathcal{R}_{\mathcal{J}}$ do
8: compute the active sets $\{\mathcal{J}_j\}_{j=1}$ defining adjacent critical regions on $F$ using Algorithm 1
9: for all $\mathcal{J}_j$ do
10: if $\mathcal{J}_j$ is not in $\mathcal{L}$ then
11: $\mathcal{L} \leftarrow \mathcal{L} \cup \{\mathcal{J}_j\}$
12: $\mathcal{U} \leftarrow \mathcal{U} \cup \{\mathcal{J}_j\}$
13: end if
14: end for
15: end for
16: end while
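A minimal sketch of the traversal in Algorithm 2 follows, with the region computations of steps 6 to 8 abstracted behind hypothetical callables `facets_of` and `adjacency_oracle`, and with index sets represented by any hashable objects (frozensets in practice, strings in the toy call below).

```python
from collections import deque

def traverse_critical_regions(J0, facets_of, adjacency_oracle):
    """Enumerate all full-dimensional critical regions reachable from J0.

    facets_of(J)           -> iterable of facets of the region R_J (hypothetical helper)
    adjacency_oracle(J, F) -> iterable of index sets adjacent to R_J along facet F
                              (Algorithm 1; hypothetical helper)
    """
    unexplored = deque([J0])        # the list U
    discovered = {J0}               # the list L
    while unexplored:
        J = unexplored.popleft()
        for F in facets_of(J):
            for J_adj in adjacency_oracle(J, F):
                if J_adj not in discovered:
                    discovered.add(J_adj)
                    unexplored.append(J_adj)
    return discovered

# Toy usage on a hand-made adjacency structure (purely illustrative).
toy_facets = {"J0": ["F1"], "J1": ["F1"]}
toy_adj = {("J0", "F1"): ["J1"], ("J1", "F1"): ["J0"]}
regions = traverse_critical_regions("J0",
                                    lambda J: toy_facets[J],
                                    lambda J, F: toy_adj[(J, F)])
print(regions)   # {'J0', 'J1'}
```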

The validity of Algorithm 2 follows from the following corollary.


Corollary 2. The critical region graph is connected.
Proof. This follows from the convexity of $\operatorname{dom} V$, which implies that between every pair of critical regions there exists a path in the critical region graph.

7. Finite horizon $H_\infty$ optimal control for constrained linear systems

Consider the discrete-time linear system

$$x_{k+1} = \mathbf{A} x_k + \mathbf{B} u_k + \mathbf{G} w_k \qquad (25)$$
$$z_k = \mathbf{C} x_k + \mathbf{D} u_k \qquad (26)$$
where $x_k \in \mathbb{R}^{n_x}$ is the state vector, $u_k \in \mathbb{R}^{n_u}$ is the control vector, $w_k \in \mathbb{R}^{n_w}$ is the disturbance, $z_k \in \mathbb{R}^{n_z}$ is the controlled output and the matrices $\mathbf{A}, \mathbf{B}, \mathbf{G}, \mathbf{C}, \mathbf{D}$ are of compatible dimensions. The system is subject to hard state and control constraints, i.e. $x \in X$ and $u \in U$, where $X$, $U$ are polytopes containing the origin. The disturbance belongs to a polytope $W$ that contains the origin.

Let $\pi \triangleq \{\mu_0(\cdot), \mu_1(\cdot), \dots, \mu_{N-1}(\cdot)\}$ denote a control policy (sequence of state feedback control laws) over horizon $N$ and let $\mathbf{w} \triangleq \{w_0, w_1, \dots, w_{N-1}\}$ denote a sequence of disturbances. Let $f(x,u,w) \triangleq \mathbf{A} x + \mathbf{B} u + \mathbf{G} w$. Also let $\phi_k(x;\pi,\mathbf{w})$ denote the solution of (25) when the state at time 0 is $x$, the control policy is $\pi$ and the disturbance sequence is $\mathbf{w}$, i.e. $\phi_k(x;\pi,\mathbf{w})$ is the solution at time $k$ of
$$x_{k+1} = f(x_k, \mu_k(x_k), w_k), \qquad x_0 = x \qquad (27)$$

The cost $V_N(x,\pi,\mathbf{w})$ corresponding to initial state $x$, control policy $\pi$ and disturbance sequence $\mathbf{w}$ is
$$V_N(x,\pi,\mathbf{w}) = \sum_{k=0}^{N-1} \ell(x_k, u_k, w_k) + V_f(x_N) \qquad (28)$$
where $\ell(x,u,w) \triangleq \|z\|_2^2 - \gamma^2 \|w\|_2^2$ and, for all $k$, $x_k = \phi_k(x;\pi,\mathbf{w})$, $u_k = \mu_k(x_k)$, $z_k = \mathbf{C} x_k + \mathbf{D} u_k$, $V_f$ is the terminal cost and $\gamma > 0$. The optimal control problem that we consider is
$$\mathbb{P}_N(x):\quad V_N(x) \triangleq \inf_{\pi \in \Pi_N(x)} \sup_{\mathbf{w} \in \mathcal{W}} V_N(x,\pi,\mathbf{w})$$
where $\mathcal{W} \triangleq W^N$ is the set of admissible disturbance sequences and $\Pi_N$ is the set-valued mapping that maps initial states to admissible control policies, i.e.

$$\Pi_N(x) = \left\{\, \pi \;\middle|\; \begin{array}{ll} \phi_k(x;\pi,\mathbf{w}) \in X, & k \in [0, N-1]\\ \mu_k(\phi_k(x;\pi,\mathbf{w})) \in U, & k \in [0, N-1]\\ \phi_N(x;\pi,\mathbf{w}) \in X_f & \end{array} \quad \forall\, \mathbf{w} \in \mathcal{W} \,\right\} \qquad (29)$$
where $X_f$ is the terminal set.

The solution to $\mathbb{P}_N(x)$ can be obtained via dynamic programming as follows:
$$J_k(x,u) = \sup_{w \in W}\{\, \ell(x,u,w) + V_{k-1}(f(x,u,w)) \,\} \qquad (30)$$
$$V_k(x) = \inf_u J_k(x,u), \qquad \kappa_k(x) = \operatorname{argmin}_u J_k(x,u) \qquad (31)$$
for $k = 1,\dots,N$, with terminal conditions $V_0 \triangleq V_f$ and $\mathcal{X}_0 \triangleq X_f$. Let $\mathcal{Z}_k \triangleq \{(x,u) \in X \times U \mid f(x,u,w) \in \mathcal{X}_{k-1},\ \forall w \in W\}$ and $\mathcal{X}_k \triangleq \{x \in X \mid \exists u \in U,\ (x,u) \in \mathcal{Z}_k\}$ for $k \in [1,N]$. Then $V_k : \mathbb{R}^{n_x} \to \overline{\mathbb{R}}$ with $\operatorname{dom} V_k = \mathcal{X}_k$ and $J_k : \mathbb{R}^{n_x} \times \mathbb{R}^{n_u} \to \overline{\mathbb{R}}$ with $\operatorname{dom} J_k = \mathcal{Z}_k$, $k \in [0,N]$. Here, $\kappa_k$ denotes the optimal control law $\mu_{N-k}$ at time $N-k$.

After computing off-line the explicit solution of $\mathbb{P}_N$, one can in principle apply the time-invariant receding horizon control law $\kappa_N$ to the system in real time. Choices for the terminal cost and terminal set such that $\mathcal{X}_N$ is robust positively invariant for the closed-loop system $x^+ = f(x, \kappa_N(x), w)$ and the finite $\ell_2$-gain property from the disturbance to the controlled output holds are given in Mayne et al. (2006), section 7.3.
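Once the explicit solution has been computed, applying $\kappa_N$ on-line amounts to locating the critical region containing the current state and evaluating the corresponding affine law. The sketch below illustrates this with two hypothetical regions and gains; all data are assumed, not taken from the example of section 7.3.

```python
import numpy as np

# Hypothetical explicit controller: each critical region given in halfspace form
# {x | H x <= h}, with an affine control law u = F x + g on that region.
regions = [
    dict(H=np.array([[1.0, 0.0], [-1.0, 0.0]]), h=np.array([0.0, 10.0]),
         F=np.array([[-0.5, -0.4]]), g=np.array([0.0])),
    dict(H=np.array([[-1.0, 0.0], [1.0, 0.0]]), h=np.array([0.0, 10.0]),
         F=np.array([[-0.6, -0.3]]), g=np.array([0.0])),
]

def kappa_N(x):
    """Evaluate the explicit receding horizon law by point location."""
    for r in regions:
        if np.all(r["H"] @ x <= r["h"] + 1e-9):
            return r["F"] @ x + r["g"]
    raise ValueError("x outside the feasible set")

A = np.array([[1.0, 0.8], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
G = np.eye(2)
x = np.array([1.0, -0.5])
for _ in range(5):                        # closed-loop simulation with random disturbances
    u = kappa_N(x)
    w = np.random.uniform(-0.1, 0.1, size=2)
    x = A @ x + B @ u + G @ w
    print(x)
```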


The difficulty with the solvability of the dynamic programming equations lies in the fact that $\tilde V(z,w) \triangleq \ell(z,w) + V_{k-1}(f(z,w))$ should be concave in $w$ for any $z \triangleq (x,u)$ and any $k \in [1,N]$. Then (31) is a parametric piecewise quadratic optimization problem and can be readily solved by Algorithm 2. However, choosing an appropriate value for $\gamma$ in order to accomplish this is not an easy task, except in the following situation.

Lemma 1. Consider $\tilde V(z,w) \triangleq \ell(z,w) + V(f(z,w))$ with $V : \mathbb{R}^n \to \overline{\mathbb{R}}$ proper, convex, PWQ and continuously differentiable. Then $\tilde V(z,\cdot)$ is concave if and only if $\gamma^2 \mathbf{I} \succeq \mathbf{G}' \mathbf{Q}_i \mathbf{G}$, where the $\mathbf{Q}_i$ are the Hessian matrices of the pieces of $V$.
Proof. Sufficiency is proved in Mayne et al. (2006), proposition 3. In order to prove necessity, notice that $\tilde V(z,\cdot)$ is PWQ and continuously differentiable, with $\tilde V(z,w) = -\tfrac12\, w' (\gamma^2 \mathbf{I} - \mathbf{G}' \mathbf{Q}_i \mathbf{G})\, w + b_i(z)'\, w + c_i(z)$. A continuously differentiable PWQ function is convex if and only if it is convex in each open region where it is quadratic (Chen, Madsen & Zhang, 2005), and the claim follows.
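Under Lemma 1, the concavity condition $\gamma^2 \mathbf{I} \succeq \mathbf{G}' \mathbf{Q}_i \mathbf{G}$ can be checked, and the smallest admissible $\gamma$ estimated, from the largest eigenvalue of each $\mathbf{G}' \mathbf{Q}_i \mathbf{G}$. A minimal numpy sketch with placeholder data follows.

```python
import numpy as np

G = np.eye(2)                                            # disturbance matrix (placeholder)
Q_pieces = [np.diag([2.0, 1.0]), np.diag([4.0, 0.5])]    # Hessians of the pieces of V (placeholder)

# gamma^2 I >= G' Q_i G for all i  <=>  gamma^2 >= lambda_max(G' Q_i G) for all i
lam_max = max(np.linalg.eigvalsh(G.T @ Q @ G).max() for Q in Q_pieces)
gamma_min = np.sqrt(lam_max)
print("smallest gamma satisfying the concavity condition:", gamma_min)   # 2.0 here
```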

In Mayne et al. (2006), Theorem 5, the following claim is proved. Consider the problem $J(z) = \sup_{w \in W} \tilde V(z,w)$ where $\tilde V(z,w) \triangleq \ell(z,w) + V(f(z,w))$ with $V : \mathbb{R}^n \to \overline{\mathbb{R}}$ strictly convex, PWQ and continuously differentiable. Then $\tilde V(\cdot,w)$ is strictly convex for any $w$, and there exists a $\gamma > 0$ such that $\tilde V(z,\cdot)$ is strictly concave. Consequently, $J$ is proper, strictly convex and PWQ. Therefore, the stringent assumption of smoothness of the value function is imposed in that work in order to characterize the solution of the constrained $H_\infty$ problem. However, this assumption is very restrictive, since nonsmoothness of the value function is an intrinsic feature of almost every constrained parametric optimization problem. We will next show how one can remove this restrictive assumption by closely approximating $V$ with its Moreau envelope.

7.1. Moreau envelopes and smoothing of convex PWQ functions

For a proper, lsc function $f : \mathbb{R}^n \to \overline{\mathbb{R}}$ and a parameter value $\lambda > 0$, the Moreau envelope (or Moreau-Yosida regularization) $e_\lambda f$ (Rockafellar & Wets, 2009, definition 1.22) is defined by
$$e_\lambda f(x) \triangleq \inf_y \{\, f(y) + (2\lambda)^{-1} \|y - x\|^2 \,\}.$$
We have $e_\lambda f(x) \to f(x)$ for all $x$ as $\lambda \downarrow 0$ (Rockafellar & Wets, 2009, theorem 1.25). If $f : \mathbb{R}^n \to \overline{\mathbb{R}}$ is lsc, proper and convex, then the Moreau envelope $e_\lambda f$ is convex and continuously differentiable for any $\lambda > 0$ (Rockafellar & Wets, 2009, theorem 2.26). Furthermore, if $f : \mathbb{R}^n \to \overline{\mathbb{R}}$ is proper, convex and PWQ, then $e_\lambda f$ is PWQ (Rockafellar & Wets, 2009, proposition 12.30).

In fact, $e_\lambda f(x) = \inf_y \{ f(y) + (2\lambda)^{-1} \|y - x\|^2 \}$ is a convex parametric piecewise quadratic optimization problem, with $x$ being the parameter vector and $y$ the optimization vector. Therefore, one can compute an arbitrarily close smooth PWQ approximation of a convex PWQ function by computing its Moreau envelope with Algorithm 2. An example is presented in figure 1 (all figures in the paper are drawn using the Multi-Parametric Toolbox, Kvasnica, Grieder, Baotić & Morari, 2003). A useful conclusion drawn from extensive simulations is that the complexity of the Moreau envelope (in terms of quadratic pieces) is the same as that of $f$, or slightly higher.
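For intuition, the following sketch computes the Moreau envelope of a simple nonsmooth convex PWQ function of one variable on a grid, solving the inner minimization numerically; the function $f(x) = |x| + 0.5x^2$ is chosen here only for illustration and is not the example of figure 1.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def f(x):
    # A nonsmooth, convex, piecewise quadratic example (kink at 0).
    return abs(x) + 0.5 * x**2

def moreau_envelope(f, x, lam):
    # e_lambda f(x) = inf_y { f(y) + (2 lam)^-1 (y - x)^2 }
    res = minimize_scalar(lambda y: f(y) + (y - x)**2 / (2.0 * lam),
                          bounds=(x - 10.0, x + 10.0), method="bounded")
    return res.fun

xs = np.linspace(-2.0, 2.0, 9)
for lam in (1.0, 0.1, 0.01):
    env = [moreau_envelope(f, x, lam) for x in xs]
    err = max(abs(f(x) - e) for x, e in zip(xs, env))
    print(f"lambda = {lam:5.2f}: max |f - e_lambda f| on grid = {err:.4f}")
```

As expected from theorem 1.25 of Rockafellar & Wets (2009), the approximation error shrinks as $\lambda$ decreases, while the envelope remains smooth.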

Figure 1 Moreau envelope of a nonsmooth PWQ function. The top left panel shows the nonsmooth PWQ function $f$, the top right the discontinuity of its gradient, the bottom left the continuous gradient of $e_\lambda f$ (for $\lambda = 10^{-8}$) and the bottom right the relative error, i.e. $(f(x) - e_\lambda f(x))/f(x)$.


7.2. Constrained $H_\infty$ optimal control algorithm

We will say that the constrained $H_\infty$ problem is $\gamma$-solvable if $\gamma$ is such that $\tilde V_k(z,\cdot)$ is concave, where $\tilde V_k(z,w) \triangleq \ell(z,w) + V_{k-1}(f(z,w))$, for each $k \in [1,N]$. Consider a $\gamma$ such that the constrained $H_\infty$ problem is $\gamma$-solvable. Then Algorithm 3 gives an arbitrarily close to optimal policy for the constrained $H_\infty$ problem.

Algorithm 3: Constrained Finite Horizon $H_\infty$ optimal control
Input: System matrices $\mathbf{A}, \mathbf{B}, \mathbf{G}, \mathbf{C}, \mathbf{D}$, constraint sets $X, U, W$, terminal set $X_f$, terminal cost $V_f$, horizon $N$, parameter $\lambda$.
1: $V_0 \leftarrow V_f$, $\mathcal{X}_0 \leftarrow X_f$
2: for $k = 1,\dots,N$
3: Solve $J_k(x,u) = \sup_{w \in W}\{\ell(x,u,w) + V_{k-1}(f(x,u,w))\}$
4: Solve $V_k(x) = \inf_u J_k(x,u)$, $\kappa_k(x) = \operatorname{argmin}_u J_k(x,u)$
5: if $V_k$ is nonsmooth then
6: $e_\lambda V_k(x) = \inf_y \{ V_k(y) + (2\lambda)^{-1} \|y - x\|^2 \}$
7: $V_k \leftarrow e_\lambda V_k$
8: else
9: $V_k \leftarrow V_k$
10: end if
11: end for

Notice that checking whether $V_k$ is smooth is performed automatically by Algorithm 2, since $\partial V_k(x) = \{y \mid (y, 0) \in \partial J_k(x,u)\}$ for any $u \in \operatorname{argmin}_u J_k(x,u)$ (see proposition 8 and Theorem 10.13 in Rockafellar & Wets, 2009) and smoothness is equivalent to $\partial V_k$ being single-valued for any $x \in \operatorname{dom} V_k$ (Corollary 9.19 in Rockafellar & Wets, 2009).
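A skeletal rendering of the backward recursion of Algorithm 3 is given below. The parametric solvers are abstracted behind hypothetical callables (`solve_sup_w`, `solve_parametric_min`, `is_smooth`, `moreau_envelope`), since each of these steps is itself a parametric piecewise quadratic problem handled by Algorithm 2; the sketch only fixes the control flow.

```python
def h_infinity_recursion(V_f, N, lam,
                         solve_sup_w, solve_parametric_min,
                         is_smooth, moreau_envelope):
    """Backward dynamic programming of Algorithm 3 with the heavy steps stubbed out.

    solve_sup_w(V_prev)        -> J_k   (maximization over w in W, step 3)
    solve_parametric_min(J_k)  -> (V_k, kappa_k)  (parametric minimization, step 4)
    is_smooth(V_k)             -> bool  (single-valuedness of the subdifferential)
    moreau_envelope(V_k, lam)  -> smooth PWQ approximation e_lam V_k (steps 6-7)
    All four are hypothetical helpers standing in for Algorithm 2.
    The terminal set is assumed to be encoded in the domain of V_f.
    """
    V, controllers = V_f, []
    for k in range(1, N + 1):
        J_k = solve_sup_w(V)                        # step 3
        V_k, kappa_k = solve_parametric_min(J_k)    # step 4
        if not is_smooth(V_k):                      # steps 5-10
            V_k = moreau_envelope(V_k, lam)
        V = V_k
        controllers.append(kappa_k)
    return V, controllers[-1]   # value function and receding-horizon law kappa_N
```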

7.3. Illustrative example

The example is taken from Kerrigan & Maciejowski (2004):

$$\mathbf{A} = \begin{bmatrix} 1 & 0.8\\ 0 & 0.7 \end{bmatrix}, \quad \mathbf{B} = \begin{bmatrix} 0\\ 1 \end{bmatrix}, \quad \mathbf{G} = \mathbf{I}_2, \quad X = \{x \mid \|x\|_\infty \le 10\}, \quad U = \{u \mid |u| \le 3\},$$
$$W = \{w \mid \|w\|_\infty \le 0.1\}, \quad N = 10, \quad \mathbf{Q} = \mathbf{I}_2, \quad \mathbf{R} = 0.1.$$

The terminal cost $V_f(x) = x' \mathbf{P}_f x$ and terminal controller $\kappa_f(x) = \mathbf{K}_f x$ for the unconstrained system are computed by solving the LMI problem 2.265 appearing in Kwon & Han (2005), and the terminal set is the maximal robust positively invariant set for $x^+ = (\mathbf{A} + \mathbf{B}\mathbf{K}_f) x + \mathbf{G} w$. Through a trial and error procedure, a value of $\gamma$ for which the problem is $\gamma$-solvable is $\gamma = 5.19$. Running Algorithm 3, the receding horizon controller is defined over 63 critical regions, as shown in figure 2.
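For reference, the example data written out in numpy. The halfspace form of the constraint sets is an assumed representation, and $\mathbf{P}_f$, $\mathbf{K}_f$ are left as placeholders since they come from the LMI of Kwon & Han (2005).

```python
import numpy as np

A = np.array([[1.0, 0.8],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
G = np.eye(2)
N, gamma = 10, 5.19
Q, R = np.eye(2), np.array([[0.1]])

# Constraint sets in halfspace form {v | H v <= h} (assumed representation).
H_x, h_x = np.vstack([np.eye(2), -np.eye(2)]), 10.0 * np.ones(4)   # ||x||_inf <= 10
H_u, h_u = np.array([[1.0], [-1.0]]), 3.0 * np.ones(2)             # |u| <= 3
H_w, h_w = np.vstack([np.eye(2), -np.eye(2)]), 0.1 * np.ones(4)    # ||w||_inf <= 0.1

# P_f, K_f: terminal cost and controller from the LMI of Kwon & Han (2005); not computed here.
```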

Figure 2 Polyhedral decomposition of $\kappa_N$ for the example.

8. Conclusions

In this paper the problem of parametric optimization for convex piecewise quadratic functions was studied. Parametric optimization problems with strictly convex piecewise quadratic functions were also studied in Mayne et al. (2006).



However, in this work we study problems where the objective function can be merely convex, allowing the solution map to be multivalued, and we follow an entirely different route of analysis based on graphical differentiation of set-valued mappings. From a theoretical perspective, formulas for the graphical derivative of the solution map were proved. Based on these formulas, algorithms were derived for calculating all full dimensional critical regions adjacent to a facet of an already discovered critical region, without assuming any constraint qualification, extending the work in Patrinos & Sarimveis (2010). This result is important since, when coupled with the graph traversal paradigm for the exploration of the parameter space, all full dimensional critical regions are discovered. Thus, unlike Mayne et al. (2006), where the exploration of the parameter space is performed using a reverse transformation procedure (the so-called region complement technique), the algorithm presented in this paper is based on the graph traversal technique. It is well known that graph traversal algorithms have compelling advantages compared to algorithms based on the region complement technique, cf. Jones (2005), Jones, Kerrigan & Maciejowski (2007), Jones, Baric & Morari (2007), Columbano et al. (2009). In fact, the graph traversal algorithm is the only known method that achieves output sensitivity for parametric optimization.

The paper ends with an application to the constrained finite horizon $H_\infty$ optimal control problem, where the continuous differentiability assumption on the value functions made in Mayne et al. (2006) is relaxed by constructing smooth approximations using Moreau envelopes.

Appendix

Let $\mathcal{C} \triangleq \{C^k \mid k \in \mathcal{K}\}$ be a polyhedral decomposition of $D \subseteq \mathbb{R}^n$. Let $\mathcal{N}_k(F) \triangleq \{\ell \in \mathcal{K} \mid \dim(C^\ell \cap F) = n-1\}$ for $F \in \mathcal{F}_{n-1}(C^k)$, $k \in \mathcal{K}$, i.e. $\mathcal{N}_k(F)$ is the index set of polyhedral sets that are adjacent to $C^k$ along facet $F$. This information is provided by the adjacency list of the graph of the polyhedral decomposition and can be obtained at almost no computational cost when solving a parametric optimization problem. Then, the following algorithm transforms the polyhedral decomposition into a polyhedral subdivision by subdividing appropriately only those polyhedral sets for which the facet-to-facet property is violated. An example is presented in figure 3.

Algorithm 4: Decomposition to Subdivision
Input: Polyhedral decomposition $\mathcal{C} = \{C^k \mid k \in \mathcal{K}\}$
Output: Polyhedral subdivision $\mathcal{C}' = \{C^{k'} \mid k' \in \mathcal{K}'\}$
1: $\mathcal{C}' = \emptyset$
2: for each $k \in \mathcal{K}$
3: if $C^k$ violates the facet-to-facet property
4: $x_0 \leftarrow \operatorname{int\_point}(C^k)$ // calculate a point in the interior of $C^k$
5: for each $F \in \mathcal{F}(C^k)$
6: if $|\mathcal{N}_k(F)| > 1$ // facet-to-facet violation along facet $F$
7: for each $\ell \in \mathcal{N}_k(F)$
8: $\mathcal{C}' \leftarrow \mathcal{C}' \cup \operatorname{conv}(\{x_0\} \cup (C^\ell \cap C^k))$
9: end for
10: else
11: $\mathcal{C}' \leftarrow \mathcal{C}' \cup \operatorname{conv}(\{x_0\} \cup F)$
12: end if
13: end for
14: else
15: $\mathcal{C}' \leftarrow \mathcal{C}' \cup C^k$
16: end if
17: end for
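A minimal sketch of the subdivision step (line 8 of Algorithm 4) in two dimensions, with regions represented by vertex arrays; the pieces $C^\ell \cap C^k$ along the offending facet are assumed to be given as vertex lists, since how they are obtained depends on the polyhedral library in use.

```python
import numpy as np

def subdivide_region(vertices, facet_pieces):
    """Split a region violating the facet-to-facet property into pyramids.

    vertices     : (m, n) array of the region's vertices
    facet_pieces : list of vertex arrays, one per piece C^l cap C^k of an offending facet
    Returns a list of vertex arrays, each describing conv({x0} U piece).
    """
    x0 = vertices.mean(axis=0)               # an interior point of C^k (its centroid)
    return [np.vstack([x0, piece]) for piece in facet_pieces]

# Toy example: the unit square whose bottom edge is shared by two neighbours.
square = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
pieces = [np.array([[0.0, 0.0], [0.5, 0.0]]),    # piece shared with neighbour 1
          np.array([[0.5, 0.0], [1.0, 0.0]])]    # piece shared with neighbour 2
for P in subdivide_region(square, pieces):
    print(P)
```

The non-violated facets would be handled analogously by line 11, coning the same interior point with each such facet.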


Figure 3 Conversion of a polyhedral decomposition to a polyhedral subdivision.

Acknowledgements

The work of the first author was financially supported by the National Scholarship Foundation of Greece and the NTUA Senator Committee of Basic Research Program 65/1631.

References

Alessio, A. & Bemporad, A. (2008). A survey on explicit model predictive control. In L. Magni, D.M. Raimondo & F. Allgöwer (Eds.), Nonlinear Model Predictive Control: Towards New Challenging Applications, Lecture Notes in Control and Information Sciences, vol. 384, pp. 345-369. Berlin Heidelberg: Springer-Verlag.
Aubin, J.-P. (1981). Contingent derivatives of set-valued maps and existence of solutions to nonlinear inclusions and differential inclusions. In L. Nachbin (Ed.), Mathematical Applications, Part A, pp. 159-229. New York-London: Academic Press.
Aubin, J.-P. & Frankowska, H. (1990). Set-Valued Analysis. Boston: Birkhäuser.
Bank, B., Guddat, J., Klatte, D., Kummer, B. & Tammer, K. (1983). Non-Linear Parametric Optimization. Basel-Boston: Birkhäuser.
Baotić, M., Christophersen, F.J. & Morari, M. (2006). Constrained optimal control of hybrid systems with a linear performance index. IEEE Transactions on Automatic Control, 51(12), 1903-1919.
Bemporad, A., Borrelli, F. & Morari, M. (2002). Model predictive control based on linear programming—the explicit solution. IEEE Transactions on Automatic Control, 47(12), 1974-1985.
Bemporad, A., Borrelli, F. & Morari, M. (2003). Min-max control of constrained uncertain discrete-time linear systems. IEEE Transactions on Automatic Control, 48(9), 1600-1606.
Bemporad, A., Morari, M., Dua, V. & Pistikopoulos, E.N. (2002). The explicit linear quadratic regulator for constrained systems. Automatica, 38, 3-20.
Bertsekas, D.P., Nedić, A. & Ozdaglar, A. (2003). Convex Analysis and Optimization. Athena Scientific.
Borrelli, F. (2003). Constrained Optimal Control of Linear and Hybrid Systems. Lecture Notes in Control and Information Sciences, vol. 290. Springer-Verlag.
Borrelli, F., Baotić, M., Bemporad, A. & Morari, M. (2005). Dynamic programming for constrained optimal control of discrete-time linear hybrid systems. Automatica, 41, 1709-1721.
Borrelli, F., Bemporad, A. & Morari, M. (2003). A geometric algorithm for multi-parametric linear programming. Journal of Optimization Theory and Applications, 118(3), 515-540.
Christophersen, F.J., Baotić, M. & Morari, M. (2005). Optimal control of piecewise affine systems: A dynamic programming approach. Lecture Notes in Control and Information Sciences, vol. 322, pp. 183-198.
Chen, B.T., Madsen, K. & Zhang, S. (2005). On the characterization of quadratic splines. Journal of Optimization Theory and Applications, 124(1), 93-111.
Diehl, M. & Bjornberg, J. (2004). Robust dynamic programming for min-max model predictive control of constrained uncertain systems. IEEE Transactions on Automatic Control, 49(12), 2253-2257.
Dontchev, A.L. & Rockafellar, R.T. (2001a). Ample parameterization of variational inclusions. SIAM Journal on Optimization, 12, 170-187.
Dontchev, A.L. & Rockafellar, R.T. (2001b). Primal-dual solution perturbations in convex optimization. Set-Valued Analysis, 9, 49-65.
Dontchev, A.L. & Rockafellar, R.T. (2009). Implicit Functions and Solution Mappings: A View from Variational Analysis. Springer Series in Mathematics.
Ewald, G. (1996). Combinatorial Convexity and Algebraic Geometry. New York: Springer-Verlag.
Facchinei, F. & Pang, J.S. (2003). Finite Dimensional Variational Inequalities and Complementarity Problems, vol. I. New York: Springer.
Fiacco, A.V. (1983). Introduction to Sensitivity and Stability Analysis in Nonlinear Programming. London: Academic Press.
Columbano, S., Fukuda, K. & Jones, C.N. (2009). An output-sensitive algorithm for multi-parametric LCPs with sufficient matrices. CRM Proceedings and Lecture Notes, vol. 48.
Grünbaum, B. (2003). Convex Polytopes, revised edition (Kaibel, V., Klee, V. & Ziegler, G.M., Eds.), Graduate Texts in Mathematics 221. New York: Springer-Verlag.
Jones, C.N. (2005). Polyhedral Tools for Control. PhD thesis, University of Cambridge.
Jones, C.N., Baric, M. & Morari, M. (2007). Multiparametric linear programming with applications to control. European Journal of Control, 13, 152-170.
Jones, C.N., Kerrigan, E.C. & Maciejowski, J.M. (2007). Lexicographic perturbation for multiparametric linear programming with applications to control. Automatica, 43(10), 1808-1816.
Kerrigan, E.C. & Maciejowski, J.M. (2004). Feedback min-max model predictive control using a single linear program: Robust stability and the explicit solution. International Journal of Robust and Nonlinear Control, 14, 395-413.


Kerrigan, E.C. & Mayne, D.Q. (2002). Optimal control of constrained, piecewise affine systems with bounded disturbances. In Proc. 42nd IEEE Conf. Decision and Control, Las Vegas, NV, Dec. 2002, pp. 1552-1557.
Klatte, D. & Kummer, B. (2002). Nonsmooth Equations in Optimization: Regularity, Calculus, Methods and Applications. Dordrecht-Boston-London: Kluwer.
Kvasnica, M., Grieder, P., Baotić, M. & Morari, M. (2003). Multi-Parametric Toolbox (MPT). In Hybrid Systems: Computation and Control, Lecture Notes in Computer Science, vol. 2993, pp. 448-462. Philadelphia, Pennsylvania, USA: Springer-Verlag. http://control.ee.ethz.ch/~mpt.
Levy, A.B. & Rockafellar, R.T. (1994). Sensitivity analysis of solutions to generalized equations. Transactions of the American Mathematical Society, 345, 661-671.
Levy, A.B. & Rockafellar, R.T. (1995). Sensitivity of solutions in nonlinear programming problems with nonunique multipliers. In D.-Z. Du, L. Qi & R.S. Womersley (Eds.), Recent Advances in Nonsmooth Optimization, pp. 215-223. World Scientific.
Levy, A.B. & Rockafellar, R.T. (1996). Variational conditions and the proto-differentiation of partial subgradient mappings. Nonlinear Analysis, 26, 1951-1964.
Levy, A.B. (2001). Solution sensitivity from general principles. SIAM Journal on Control and Optimization, 40, 1-38.
Louveaux, F.V. (1978). Piecewise convex programs. Mathematical Programming, 15, 53-62.
Lucet, Y., Bauschke, H.H. & Trienis, M. (2009). The piecewise linear quadratic model for computational convex analysis. Computational Optimization and Applications, 43(1), 95-118.
Mayne, D.Q., Raković, S.V., Vinter, R.B. & Kerrigan, E.C. (2006). Characterization of the solution to a constrained H∞ optimal control problem. Automatica, 42, 371-382.
Mayne, D.Q., Raković, S.V. & Kerrigan, E.C. (2007). Optimal control and piecewise parametric programming. In Proceedings of the European Control Conference 2007, Kos, Greece.
Patrinos, P. & Sarimveis, H. (2010). A new algorithm for solving convex parametric quadratic programs based on graphical derivatives of solution set mappings. Accepted for publication in Automatica.
Pistikopoulos, E.N., Georgiadis, M.C. & Dua, V. (2007a). Multiparametric Programming, Process Systems Engineering, Vol. 1. Weinheim, Germany: Wiley-VCH.
Pistikopoulos, E.N., Georgiadis, M.C. & Dua, V. (2007b). Multiparametric Model-Based Control, Process Systems Engineering, Vol. 2. Weinheim, Germany: Wiley-VCH.
Ralph, D. & Dempe, S. (1995). Directional differentiability of the solution of a parametric nonlinear program. Mathematical Programming, 70, 159-172.
Rawlings, J.B. & Mayne, D.Q. (2009). Model Predictive Control: Theory and Design. Nob Hill Publishing.
Robinson, S.M. (1980). Strongly regular generalized equations. Mathematics of Operations Research, 5, 43-62.
Robinson, S.M. (1981). Some continuity properties of polyhedral multifunctions. Mathematical Programming Study, 14, 206-214.
Rockafellar, R.T. (1970). Convex Analysis. Princeton University Press.
Rockafellar, R.T. (1988). First- and second-order epi-differentiability in nonlinear programming. Transactions of the American Mathematical Society, 307, 75-108.
Rockafellar, R.T. & Wets, R.J.-B. (2009). Variational Analysis. Berlin: Springer-Verlag, 3rd corrected printing.
Scholtes, S. (1994). Introduction to Piecewise Differentiable Equations. Habilitation thesis, University of Karlsruhe, Karlsruhe, Germany.
Spjøtvold, J., Tøndel, P. & Johansen, T.A. (2007). Continuous selection and unique polyhedral representation of solutions to convex parametric quadratic programs. Journal of Optimization Theory and Applications, 133, 177-189.
Spjøtvold, J., Kerrigan, E.C., Mayne, D.Q. & Johansen, T.A. (2009). Inf-sup control of discontinuous piecewise affine systems. International Journal of Robust and Nonlinear Control, 19(13), 1471-1492.
Sun, J. (1992). On the structure of convex piecewise quadratic functions. Journal of Optimization Theory and Applications, 72(3), 499-510.
Tøndel, P., Johansen, T.A. & Bemporad, A. (2003). An algorithm for multi-parametric quadratic programming and explicit MPC solutions. Automatica, 39(3), 489-497.