Separable and Localized System Level Synthesis for
Large-Scale Systems
Yuh-Shyang Wang, Member, IEEE, Nikolai Matni, Member, IEEE, and John C. Doyle
Abstract—A major challenge faced in the design of large-scale cyber-physical systems, such as power systems, the Internet of Things or intelligent transportation systems, is that traditional distributed optimal control methods do not scale gracefully, neither in controller synthesis nor in controller implementation, to systems composed of millions, billions or even trillions of interacting subsystems. This paper shows that this challenge can now be addressed by leveraging the recently introduced System Level Approach (SLA) to controller synthesis. In particular, in the context of the SLA, we define suitable notions of separability for control objective functions and system constraints such that the global optimization problem (or the iterate update problems of a distributed optimization algorithm) can be decomposed into parallel subproblems. We then further show that if additional locality (i.e., sparsity) constraints are imposed, then these subproblems can be solved using local models and local decision variables. The SLA is essential to maintaining the convexity of the aforementioned problems under locality constraints. As a consequence, the resulting synthesis methods have O(1) complexity relative to the size of the global system. We further show that many optimal control problems of interest, such as (localized) LQR and LQG, H_2 optimal control with joint actuator and sensor regularization, and (localized) mixed H_2/L_1 optimal control problems, satisfy these notions of separability, and use these problems to explore tradeoffs in performance, actuator and sensing density, and average versus worst-case performance for a large-scale power-inspired system.

Index Terms—Large-scale systems, constrained & structured optimal control, decentralized control, system level synthesis

The authors are with the Department of Control and Dynamical Systems, California Institute of Technology, Pasadena, CA 91125, USA ({yswang,nmatni,doyle}@caltech.edu). This research was in part supported by NSF NetSE, AFOSR, the Institute for Collaborative Biotechnologies through grant W911NF-09-0001 from the U.S. Army Research Office, and from MURIs "Scalable, Data-Driven, and Provably-Correct Analysis of Networks" (ONR) and "Tools for the Analysis and Design of Complex Multi-Scale Networks" (ARO). The content does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred. Preliminary versions of this work have appeared in [1]–[4].

PRELIMINARIES & NOTATION

We use lower and upper case Latin letters such as x and A to denote vectors and matrices, respectively, and lower and upper case boldface Latin letters such as x and G to denote signals and transfer matrices, respectively. We use calligraphic letters such as S to denote sets.

We work with discrete time linear time invariant systems. We use standard definitions of the Hardy spaces H_2 and H_∞, and denote their restriction to the set of real-rational proper transfer matrices by RH_2 and RH_∞. We use G[i] to denote the ith spectral component of a transfer function G, i.e., G(z) = Σ_{i=0}^{∞} (1/z^i) G[i] for |z| > 1. Finally, we use F_T to denote the space of finite impulse response (FIR) transfer matrices with horizon T, i.e., F_T := {G ∈ RH_∞ | G = Σ_{i=0}^{T} (1/z^i) G[i]}.

Let Z_+ be the set of all positive integers. We use calligraphic lower case letters such as r and c to denote subsets of Z_+. Consider a transfer matrix Φ with n_r rows and n_c columns. Let r be a subset of {1,...,n_r} and c a subset of {1,...,n_c}. We use Φ(r,c) to denote the submatrix of Φ obtained by selecting the rows according to the set r and the columns according to the set c. We use the symbol : to denote the set of all rows or all columns, i.e., we have Φ = Φ(:,:). Let {c_1,...,c_p} be a partition of the set {1,...,n_c}. Then {Φ(:,c_1),...,Φ(:,c_p)} is a column-wise partition of the transfer matrix Φ.

I. INTRODUCTION

LARGE-SCALE networked systems have emerged in extremely diverse application areas, with examples including the Internet of Things, the smart grid, automated highway systems, software-defined networks, and biological networks in science and medicine. The scale of these systems poses fundamental challenges to controller design: simple locally tuned controllers cannot guarantee optimal performance (or at times even stability), whereas a traditional centralized controller is neither scalable to compute nor physically implementable. Specifically, the synthesis of a centralized optimal controller requires solving a large-scale optimization problem that relies on the global plant model. In addition, information in the control network is assumed to be exchanged instantaneously. For large systems, the computational burden of computing a centralized controller, as well as the degradation in performance due to communication delays between sensors, actuators and controllers, make a centralized scheme unappealing.

The field of distributed (decentralized) optimal control developed to address these issues, with an initial focus placed on explicitly incorporating communication constraints between sensors, actuators, and controllers into the control design process. These communication delays are typically imposed as information sharing constraints on the set of admissible controllers in the resulting optimal control problem. The additional constraint makes the distributed optimal control problem significantly harder than its unconstrained centralized counterpart, and certain constrained optimal control problems are indeed NP-hard [5], [6]. It has been shown that in the model-matching framework, distributed optimal control problems admit a convex reformulation in the Youla domain if [7] and only if [8] the information sharing constraint is quadratically invariant (QI) with respect to the plant. With the identification of quadratic invariance as a means of convexifying the distributed optimal control problem, tractable solutions for various types of distributed objectives and constraints have been developed [9]–[15].

As promising and impressive as all of these results have been, QI focuses primarily on identifying the tractability (e.g., the convexity) of the constrained optimal control problem, and places less emphasis on the scalability of synthesizing and implementing the resulting constrained optimal controller. In particular, for a strongly connected networked system,1 quadratic invariance is only satisfied if each sub-controller communicates its locally acquired information with every other sub-controller in the system – this density in communication has negative consequences on the scalability of both the synthesis and implementation of such distributed controllers. To address these scalability issues, techniques based on regularization [16], [17], convex approximation [18]–[20], and spatial truncation [21] have been used to find sparse (static) feedback controllers that are scalable to implement (i.e., local control actions can be computed using a local subset of the global system state). These methods have been successful in extending the size of systems for which a distributed controller can be implemented and computed, but there is still a limit to their scalability as they rely on an underlying centralized synthesis procedure.

1 A networked system is strongly connected if the state of any subsystem can eventually alter the state of all other subsystems.

In our companion paper [22], we proposed a System Level Approach (SLA) to controller synthesis, and showed it to be a generalization of the distributed optimal control framework studied in the literature. The key idea is to directly design the entire response of the feedback system first, and then use this system response to construct an internally stabilizing controller that achieves the desired closed loop behavior. Specifically, the constrained optimal control problem is generalized to the following System Level Synthesis (SLS) problem:

  minimize_Φ   g(Φ)                               (1a)
  subject to   Φ stable and achievable            (1b)
               Φ ∈ S,                             (1c)

where Φ is the system response from disturbance to state and control action (formally defined in Section II), g(·) is a functional that quantifies the performance of the closed loop response, the first constraint ensures that an internally stabilizing controller exists that achieves the synthesized response Φ, and S is a set. An important consequence of this approach to constrained controller synthesis is that for any system, convex constraints can always be used to impose sparse structure on the resulting controller. If the resulting SLS problem is feasible, it follows that the resulting controller is scalable to implement, as this sparse structure in the controller implies that only local information (e.g., measurements and controller internal state) needs to be collected and exchanged to compute local control laws.

In this paper, we show that the SLA developed in [22] further allows for constrained optimal controllers to be synthesized in a localized way, i.e., by solving (iterate) subproblems of size scaling as O(1) relative to the dimension of the global system. Specifically, we first introduce the notion of separability of the objective functional g(·) and the convex set constraint S such that the global optimization problem (1) can be solved via parallel computation. We then show that if additional locality (sparsity) constraints are imposed, then these subproblems can be solved in a localized way. We show that such locality and separability conditions are satisfied by many problems of interest, such as the Localized LQR (LLQR) and Localized Distributed Kalman Filter (LDKF) [1], [2]. We then introduce the weaker notion of partially separable objectives and constraint sets, and show that this allows for the iterate subproblems of distributed optimization techniques such as the alternating direction method of multipliers (ADMM) [23] to be solved via parallel computation. Similarly to the separable case, when additional locality constraints are imposed, the iterate subproblems can further be solved in a localized way. We show that a large class of SLS problems are partially separable. Examples include the (localized) mixed H_2/L_1 optimal control problem and the (localized) H_2 optimal control problem with sensor and actuator regularization [4], [24].

The rest of the paper is organized as follows. In Section II, we formally define networked systems as a set of interacting discrete-time LTI systems, recall the distributed optimal control formulation and comment on its scalability limitations, and summarize the SLA to controller synthesis defined in our companion paper [22]. In Section III, we introduce the class of column/row-wise separable problems as a special case of (1), and show that such problems decompose into subproblems that can be solved in parallel. We then extend the discussion to the general case in Section IV, where we define the class of partially separable problems. When the problem is partially separable, we use distributed optimization combined with the techniques introduced in Section III to solve (1) in a scalable way. We also give many examples of partially separable optimal control problems, and introduce some specializations and variants of the optimization algorithm in the same section. Simulation results are shown in Section V to demonstrate the effectiveness of our method. Specifically, we synthesize a localized H_2 optimal controller for a power-inspired system with 12800 states composed of dynamically coupled randomized heterogeneous subsystems, demonstrate the optimal co-design of a localized H_2 controller with sensor and actuator placement, and solve a mixed H_2/L_1 optimal control problem. Conclusions and future research directions are summarized in Section VI.
II. PRELIMINARIES AND PROBLEM STATEMENT

We first introduce the networked system model that we consider in this paper. We then explain why the traditional distributed optimal control problem, as studied in the QI literature, does not scale gracefully to large systems. We then recall the SLA to controller synthesis defined and analyzed in [22]. This section ends with the formal problem statement considered in this paper.

A. Interconnected System Model

We consider a discrete-time LTI system of the form

  x[t+1] = A x[t] + B_2 u[t] + δ_x[t]
  z̄[t]   = C_1 x[t] + D_12 u[t]
  y[t]   = C_2 x[t] + δ_y[t]                      (2)

where x is the global system state, u is the global system input, z̄ is the global controlled output, y is the global measured output, and δ_x and δ_y are the global process and sensor disturbances, respectively. We assume that these disturbances are generated by an underlying disturbance w satisfying δ_x = B_1 w and δ_y = D_21 w, allowing us to concisely describe the system in transfer function form as

  P = [ A    B_1   B_2
        C_1   0    D_12
        C_2  D_21   0  ]  =  [ P_11  P_12
                               P_21  P_22 ]

where P_ij = C_i (zI − A)^{-1} B_j + D_ij. We use n_x, n_y, and n_u to denote the dimension of x, y, and u, respectively.

We further assume that the global system is composed of n dynamically coupled subsystems that interact with each other according to an interaction graph G = (V, E). Here V = {1,...,n} denotes the set of subsystems. We denote by x_i, u_i, and y_i the state vector, control action, and measurement of subsystem i – we further assume that the corresponding state-space matrices (2) admit compatible block-wise partitions. The set E ⊆ V × V encodes the physical interaction between these subsystems – an edge (i,j) is in E if and only if the state x_j of subsystem j directly affects the state x_i of subsystem i.

We also assume that the controller, which is the mapping from measurements to control actions, is composed of physically distributed sub-controllers interconnected via a communication network. This communication network defines the information exchange delays that are imposed on the set of admissible controllers, which for the purposes of this paper are encoded via subspace constraints (cf. [9]–[15] for examples of how information exchange delays can be encoded via subspace constraints).

The objective of the optimal control problem (formally defined in the next subsection) is then to design a feedback strategy from measurement y to control action u, subject to the aforementioned information exchange constraints, that minimizes the norm of the closed loop transfer matrix from the disturbance w to regulated output z̄.

Example 1 (Chain topology): Figure 1 shows an example of such an interconnected system. The interaction graph of this system is a chain, and further, as illustrated, local control and disturbance inputs only directly affect local subsystems. This allows us to write the dynamics of a subsystem i as

  x_i[t+1] = A_ii x_i[t] + Σ_{j∈N_i} A_ij x_j[t] + B_ii u_i[t] + δ_xi[t]
  y_i[t]   = C_ii x_i[t] + δ_yi[t],               (3)

where N_i = {j | (i,j) ∈ E} is the (incoming) neighbor set of subsystem i, A_ii, A_ij, B_ii, C_ii are suitably defined sub-blocks of the A, B_2 and C_2 matrices of compatible dimensions, and δ_xi and δ_yi are the process and sensor disturbances, respectively.

[Fig. 1. An example of an interconnected system: a chain of subsystems with states x_1,...,x_n, local disturbances δ_xi, local sensing and actuation (u_i, y_i), physical interactions between neighboring subsystems, and sub-controllers exchanging information over a communication network.]
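To make the chain dynamics (3) concrete, the following minimal simulation sketch propagates a single subsystem-level disturbance through a nearest-neighbor chain; the chain length, coupling weights, and disturbance are illustrative assumptions and are not data from the paper.

```python
import numpy as np

# Hypothetical chain of n scalar subsystems obeying (3):
#   x_i[t+1] = A_ii x_i[t] + sum_{j in N_i} A_ij x_j[t] + B_ii u_i[t] + dx_i[t]
n, T = 10, 6
A_self, A_neigh = 0.5, 0.2           # assumed coupling weights
x = np.zeros(n)
x[0] = 1.0                           # a single disturbance hitting subsystem 1 at t = 0

for t in range(T):
    x_next = np.zeros(n)
    for i in range(n):
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < n]   # chain interaction graph
        x_next[i] = A_self * x[i] + A_neigh * sum(x[j] for j in neighbors)
        # u_i[t] omitted here (open loop); with B_ii = 1 a local controller would enter above
    x = x_next
    print(t + 1, np.flatnonzero(np.abs(x) > 1e-12))  # the support spreads one hop per step
```

The printout makes visible the fact exploited later in the paper: a local disturbance can only propagate one hop per time step along the interaction graph.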
B. Distributed Optimal Control Framework

Interconnected systems (such as that defined in equation (3)) pose fundamental challenges to controller synthesis: imposing information exchange constraints between sub-controllers can make the distributed optimal control problem non-convex, and even when these problems are tractable, the synthesis and implementation of the resulting optimal controller often does not scale well to large systems. Here we briefly recall the distributed optimal control framework, and illustrate why scalability issues may arise when dealing with large systems.

Consider a dynamic output feedback control law u = K y. The distributed optimal control problem of minimizing the norm of the closed loop transfer matrix from external disturbance w to regulated output z̄ [7], [9], [10], [13]–[15] is commonly formulated as

  minimize_K   ‖ P_11 + P_12 K (I − P_22 K)^{-1} P_21 ‖      (4a)
  subject to   K internally stabilizes P                      (4b)
               K ∈ C_K                                        (4c)

where the constraint K ∈ C_K enforces information sharing constraints between the sub-controllers. When C_K is a subspace constraint, it has been shown that (4) is convex if [7] and only if [8] C_K is quadratically invariant (QI) with respect to P_22, i.e., K P_22 K ∈ C_K for all K ∈ C_K. Informally, quadratic invariance holds when sub-controllers are able to share information with each other at least as quickly as their control actions propagate through the plant [25].

When the interaction graph of the system (2) is strongly connected, the transfer matrix P_22 = C_2 (zI − A)^{-1} B_2 is generically dense, regardless of whether the matrices (A, B_2, C_2) are structured or not. In such cases, the conditions in [7] imply that any sparsity constraint C_K imposed on K in (4) does not satisfy the QI condition, and therefore (4) cannot be reformulated in a convex manner in the Youla domain [8]. In other words, for a strongly connected system, the distributed optimal control problem is convex only when the transfer matrix K is dense, i.e., only when each local measurement y_i is shared among all sub-controllers u_j in the network. For such systems, neither controller synthesis nor implementation scales gracefully to large systems. Although recent methods based on convex relaxations [18] can be used to solve certain cases of the non-convex optimal control problem (4) approximately for a sparse constraint set C_K, the underlying synthesis problem is still large-scale and does not admit a scalable reformulation. The need to address scalability, both in the synthesis and implementation of a controller, is the driving motivation behind the SLA framework, which we recall in the next subsection.

C. The System Level Approach

In our companion paper [22], we define and analyze the SLA to controller synthesis, defined in terms of three "system level" (SL) components: SL Parameterizations (SLPs), SL Constraints (SLCs) and SL Synthesis (SLS) problems. We showed that the SLS problem is a substantial generalization of the distributed optimal control problem (4). We summarize the key results that we build upon here: conceptually, the key idea is to reformulate the optimal control problem in terms of the closed loop system response, as opposed to the controller itself.

For an LTI system with dynamics given by (2), we define a system response {R, M, N, L} to be the maps satisfying

  [x; u] = [R N; M L] [δ_x; δ_y].                            (5)

We call a system response {R, M, N, L} stable and achievable with respect to a plant P if there exists an internally stabilizing controller K such that the control rule u = K y leads to closed loop behavior consistent with (5). We showed in [22] that the parameterization of all stable achievable system responses {R, M, N, L} is defined by the following affine space:

  [zI − A  −B_2] [R N; M L] = [I 0]                          (6a)
  [R N; M L] [zI − A; −C_2] = [I; 0]                         (6b)
  R, M, N ∈ (1/z) RH_∞,  L ∈ RH_∞.                           (6c)

The parameterization of all internally stabilizing controllers is further given by the following theorem.

Theorem 1 (Theorem 2 in [22]): Suppose that a set of transfer matrices (R, M, N, L) satisfy the constraints (6a) - (6c). Then an internally stabilizing controller yielding the desired system response (5) can be implemented as

  zβ = R̃_+ β + Ñ y                                            (7a)
  u  = M̃ β + L y                                              (7b)

where R̃_+ = z(I − zR), Ñ = −zN, M̃ = zM, L are in RH_∞. Further, the solutions of (6a) - (6c) with the implementation (7) parameterize all internally stabilizing controllers.

Remark 1: Equation (7) can be considered as a natural extension of the state space realization of a controller K, which is given by

  zξ = A_K ξ + B_K y
  u  = C_K ξ + D_K y.                                         (8)

Here we allow the state-space matrices A_K, B_K, C_K, D_K of the controller, as specified in (8), to instead be the stable proper transfer matrices R̃_+, M̃, Ñ, L in (7). As a result, we call the variable β in (7b) the controller state.
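The implementation (7) amounts to running two FIR (or stable proper) filters on the controller state β and the measurement y. The sketch below simulates (7) assuming the spectral components of R̃_+, Ñ, M̃, L are already available as lists of numpy matrices (index 0 corresponding to the current time step); the function name and data layout are illustrative and are not part of the paper.

```python
import numpy as np

def run_controller(Rp, Nt, Mt, L, y_seq, n_beta):
    """Simulate the controller implementation (7):
         z beta = R~+ beta + N~ y,    u = M~ beta + L y.
    Multiplication by z is a one-step time advance, so (7a) reads
         beta[t+1] = sum_k Rp[k] @ beta[t-k] + sum_k Nt[k] @ y[t-k].
    Rp, Nt, Mt, L are lists of impulse-response matrices (assumed given)."""
    T_sim = len(y_seq)
    beta = [np.zeros(n_beta) for _ in range(T_sim + 1)]
    u_seq = []
    for t in range(T_sim):
        # controller state update (7a)
        beta[t + 1] = (
            sum(Rp[k] @ beta[t - k] for k in range(len(Rp)) if t - k >= 0)
            + sum(Nt[k] @ y_seq[t - k] for k in range(len(Nt)) if t - k >= 0)
        )
        # control action (7b)
        u_t = (
            sum(Mt[k] @ beta[t - k] for k in range(len(Mt)) if t - k >= 0)
            + sum(L[k] @ y_seq[t - k] for k in range(len(L)) if t - k >= 0)
        )
        u_seq.append(u_t)
    return u_seq
```

When R̃_+, Ñ, M̃, L are row-sparse, each sub-controller only needs the entries of β and y that appear in its own rows, which is what makes the implementation localized.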
With the parameterization of all stable achievable system responses in (6a) - (6c), we now introduce a convex objective functional g(·) and an additional convex set constraint S imposed on the system response. We call g(·) a system level objective (SLO) and S a system level constraint (SLC). The complete form of the SLS problem (1) is then given by

  minimize_{R,M,N,L}   g(R, M, N, L)                         (9a)
  subject to           (6a) − (6c)                            (9b)
                       [R N; M L] ∈ S.                        (9c)

We showed in [22] that the SLS problem (9) is a generalization of the distributed optimal control problem (4) in the following sense: (i) any quadratically invariant subspace constraint imposed on the control problem (4c) can be formulated as a special case of a convex SLC in (9c), and (ii) the objective (4a), which can be written as

  ‖ [C_1 D_12] [R N; M L] [B_1; D_21] ‖,                      (10)

is a special case of a convex SLO in (9a). In particular, the SLS problem defines the broadest known class of constrained optimal control problems that can be solved using convex programming [22].

D. Localized Implementation

In our prior work [1], [3], [22], we showed how to use the SLC (9c) to design a controller that is scalable to implement. We first express the SLC set S as the intersection of three convex set components: a locality (sparsity) constraint L, a finite impulse response (FIR) constraint F_T, and an arbitrary convex set component X, i.e., S = L ∩ F_T ∩ X. The locality constraint L imposed on a transfer matrix G is a collection of sparsity constraints of the form G_ij = 0 for some i and j. The constraint F_T restricts the optimization variables to be finite impulse response transfer matrices of horizon T, thus making optimization problem (9) finite dimensional. The constraint X includes any other convex constraint imposed on the system, such as communication constraints, bounds on system norms, etc. This leads to a localized SLS problem given by

  minimize_{R,M,N,L}   g(R, M, N, L)                         (11a)
  subject to           (6a) − (6c)                            (11b)
                       [R N; M L] ∈ L ∩ F_T ∩ X.              (11c)

To design a controller that is scalable to implement, i.e., one in which each sub-controller needs to collect information from only O(1) other sub-controllers to compute its local control action, we impose sparsity constraints on the system response (R, M, N, L) through the locality constraint L. Theorem 1 makes clear that the structure of the system response (R, M, N, L) translates directly to the structure, and hence the implementation complexity, of the resulting controller (7). For instance, if each row of these transfer matrices has a small number of nonzero elements, then each sub-controller only needs to collect a small number of measurements and controller states to compute its control action [1], [3], [22]. This holds true for arbitrary convex objective functions g(·) and arbitrary convex constraints X.

Remark 2: For a state feedback problem (C_2 = I and D_21 = 0), we showed in [22] that the localized SLS problem (11) can be simplified to

  minimize_{R,M}   g(R, M)
  subject to       [zI − A  −B_2] [R; M] = I
                   [R; M] ∈ L ∩ F_T ∩ X.                      (12)

In addition, the controller achieving the desired system response can be implemented directly using the transfer matrices R and M [22].

Remark 3: It should be noted that although the optimization problems (11) and (12) are convex for arbitrary locality constraints L, they are not necessarily feasible. Hence in the SLA framework, the burden is shifted from verifying the convexity of a structured optimal control problem to verifying its feasibility. In [4], we showed that a necessary condition for the existence of a localized (sparse) system response is that the communication speed between sub-controllers is faster than the speed of disturbance propagation in the plant. This condition can be more restrictive than the delay conditions [25] that must be satisfied for a system to be QI. However, when such a localized system response exists, we only require fast but local communication to implement the controller.

E. Problem Statement

In previous work [1]–[4], [22], we showed that by imposing additional constraints L and F_T in (11), we can design a localized optimal controller that is scalable to implement. In this paper, we further show that the localized SLS problem (11) can be solved in a localized and scalable way if the SLO (11a) and the SLC (11c) satisfy certain separability properties. Specifically, when the localized SLS problem (11) is separable and the locality constraint L is suitably specified, the global problem (11) can be decomposed into parallel local subproblems. The complexity of synthesizing each sub-controller in the network is then independent of the size of the global system.

III. COLUMN-WISE SEPARABLE SLS PROBLEMS

In this section, we consider the state feedback localized SLS problem (12), which is a special case of (11). We begin by defining the notion of a column-wise separable optimization problem. We then show that under suitable assumptions on the objective function and the additional constraint set S, the state-feedback SLS problem (12) satisfies these conditions. We then give examples of optimal control problems, including the (localized) LQR [1] problem, that belong to the class of column-wise separable problems. In Section III-C, we propose a dimension reduction algorithm to further reduce the complexity of each column-wise subproblem from global to local scale. This section ends with an overview of row-wise separable SLS problems that are naturally seen as dual to their column-wise separable counterparts.

A. Column-wise Separable Problems

We begin with a generic optimization problem given by

  minimize_Φ   g(Φ)                                           (13a)
  subject to   Φ ∈ S,                                         (13b)

where Φ is an m × n transfer matrix, g(·) a functional objective, and S a set constraint. Our goal is to exploit the structure of (13) to solve the optimization problem in a computationally efficient way. Recall that we use calligraphic lower case letters such as c to denote subsets of positive integers. Let {c_1,...,c_p} be a partition of the set {1,...,n}. The optimization variable Φ can then be partitioned column-wise as {Φ(:,c_1),...,Φ(:,c_p)}. With the column-wise partition of the optimization variable, we can then define the column-wise separability of the functional objective (13a) and the set constraint (13b) as follows.

Definition 1: The functional objective g(Φ) in (13a) is said to be column-wise separable with respect to the column-wise partition {c_1,...,c_p} if

  g(Φ) = Σ_{j=1}^{p} g_j(Φ(:,c_j))                            (14)

for some functionals g_j(·) for j = 1,...,p.

We now give a few examples of column-wise separable objectives.

Example 2 (Frobenius norm): The square of the Frobenius norm of an m × n matrix Φ is given by

  ‖Φ‖²_F = Σ_{i=1}^{m} Σ_{j=1}^{n} Φ²_ij.

This objective function is column-wise separable with respect to an arbitrary column-wise partition.

Example 3 (H_2 norm): The square of the H_2 norm of a transfer matrix Φ is given by

  ‖Φ‖²_{H_2} = Σ_{t=0}^{∞} ‖Φ[t]‖²_F,                         (15)

which is column-wise separable with respect to an arbitrary column-wise partition.

Definition 2: The set constraint S in (13b) is said to be column-wise separable with respect to the column-wise partition {c_1,...,c_p} if the following condition is satisfied:

  Φ ∈ S if and only if Φ(:,c_j) ∈ S_j for j = 1,...,p         (16)

for some sets S_j for j = 1,...,p.

Example 4 (Affine Subspace): The affine subspace constraint

  G Φ = H

is column-wise separable with respect to an arbitrary column-wise partition. Specifically, we have

  G Φ(:,c_j) = H(:,c_j)

for c_j any subset of {1,...,n}.

Example 5 (Locality and FIR Constraints): The locality and FIR constraints introduced in Section II-D

  Φ ∈ L ∩ F_T

are column-wise separable with respect to an arbitrary column-wise partition. This follows from the fact that both locality and FIR constraints can be encoded via sparsity structure: the resulting linear subspace constraint is trivially column-wise separable.

Assume now that the objective function (13a) and the set constraint (13b) are both column-wise separable with respect to a column-wise partition {c_1,...,c_p}. In this case, we say that optimization problem (13) is a column-wise separable problem. Specifically, (13) can be partitioned into p parallel subproblems

  minimize_{Φ(:,c_j)}   g_j(Φ(:,c_j))                         (17a)
  subject to            Φ(:,c_j) ∈ S_j                        (17b)

for j = 1,...,p.
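The decomposition (17) is easy to see numerically for the objective of Example 2 and the affine constraint of Example 4: each column of Φ can be computed on its own, and hence in parallel. The sketch below uses randomly generated data and a direct KKT solve per column; all sizes and data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 12, 9, 4
C = rng.standard_normal((m, m))       # objective ||C Phi||_F^2 (a weighted version of Example 2)
G = rng.standard_normal((k, m))       # affine constraint G Phi = H (Example 4)
H = rng.standard_normal((k, n))

def solve_column(h_col):
    # minimize ||C phi||^2 subject to G phi = h_col, via the KKT system
    KKT = np.block([[2 * C.T @ C, G.T], [G, np.zeros((k, k))]])
    rhs = np.concatenate([np.zeros(m), h_col])
    return np.linalg.solve(KKT, rhs)[:m]

# each column is an independent subproblem of the form (17)
Phi = np.column_stack([solve_column(H[:, j]) for j in range(n)])
print(np.allclose(G @ Phi, H))        # the constraint holds column by column
```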
B. Column-wise Separable SLS Problems

We now specialize our discussion to the localized SLS problem (12). We use

  Φ = [R; M]

to represent the system response we want to optimize for, and we denote by Z_AB the transfer matrix [zI − A  −B_2]. The state feedback localized SLS problem (12) can then be written as

  minimize_Φ   g(Φ)                                           (18a)
  subject to   Z_AB Φ = I                                     (18b)
               Φ ∈ S,                                         (18c)

with S = L ∩ F_T ∩ X. Note that the affine subspace constraint (18b) is column-wise separable with respect to any column-wise partition, and thus the overall problem is column-wise separable with respect to a given column-wise partition if the SLO (18a) and SLC (18c) are also column-wise separable with respect to that partition.

Remark 4: Recall that the locality constraint L and the FIR constraint F_T are column-wise separable with respect to any column-wise partition. Therefore, the column-wise separability of the SLC S = L ∩ F_T ∩ X in (18) is determined by the column-wise separability of the constraint X. If S is column-wise separable, then we can express the set constraint S_j in (16) as S_j = L(:,c_j) ∩ F_T ∩ X_j for some X_j for each j.

Assume that the SLO (18a) and the SLC (18c) are both column-wise separable with respect to a column-wise partition {c_1,...,c_p}. In this case, we say that the state feedback localized SLS problem (18) is a column-wise separable SLS problem. Specifically, (18) can be partitioned into p parallel subproblems

  minimize_{Φ(:,c_j)}   g_j(Φ(:,c_j))                         (19a)
  subject to            Z_AB Φ(:,c_j) = I(:,c_j)              (19b)
                        Φ(:,c_j) ∈ L(:,c_j) ∩ F_T ∩ X_j       (19c)

for j = 1,...,p.

Here we give a few examples of column-wise separable SLS problems.

Example 6 (LLQR with uncorrelated disturbance): In [1], we formulate the LLQR problem with uncorrelated process noise (i.e., B_1 = I) as

  minimize_{R,M}   ‖ [C_1 D_12] [R; M] ‖²_{H_2}               (20a)
  subject to       [zI − A  −B_2] [R; M] = I                  (20b)
                   [R; M] ∈ L ∩ F_T ∩ (1/z) RH_∞.             (20c)

The SLO (20a) and the SLC (20c) are both column-wise separable with respect to an arbitrary column-wise partition. The separability of the SLO is implied by the separability of the H_2 norm and the assumption that the process noise is pairwise uncorrelated. The separability of the constraints (20b) - (20c) follows from the above discussion pertaining to affine subspaces. The physical interpretation of the column-wise separable property is that we can analyze the system response to each local process disturbance δ_xj in an independent and parallel way, and then exploit the superposition principle satisfied by LTI systems to reconstruct the full solution to the LLQR problem. We also note that removing the locality and FIR constraints does not affect column-wise separability, and hence the standard LQR problem with uncorrelated noise is also column-wise separable.

Example 7 (LLQR with locally correlated disturbances): Building on the previous example, we now assume that the covariance of the process noise δ_x is given by B_1^T B_1 for some matrix B_1, i.e., δ_x ∼ N(0, B_1^T B_1). In addition, suppose that there exists a permutation matrix Π such that the matrix Π B_1 is block diagonal. This happens when the global noise vector δ_x can be partitioned into uncorrelated subsets. The SLO for this LLQR problem can then be written as

  ‖ [C_1 D_12] Φ B_1 ‖²_{H_2} = ‖ [C_1 D_12] Φ Π^T Π B_1 ‖²_{H_2}.

Note that Φ Π^T is a column-wise permutation of the optimization variable. We can define a column-wise partition on Φ Π^T according to the block diagonal structure of the matrix Π B_1 to decompose the SLO in a column-wise manner. The physical meaning is that we can analyze the system response to each uncorrelated subset of the noise vector in an independent and parallel way.

Example 8 (Element-wise ℓ_1): In the LLQR examples above, we rely on the separability of the H_2 norm, as shown in (15). Motivated by the H_2 norm, we define the element-wise ℓ_1 norm (denoted by e_1) of a transfer matrix G ∈ F_T as

  ‖G‖_{e_1} = Σ_i Σ_j Σ_{t=0}^{T} |G_ij[t]|.

The previous examples still hold if we change the square of the H_2 norm to the element-wise ℓ_1 norm.

C. Dimension Reduction Algorithm

In this subsection, we discuss how the locality constraint L(:,c_j) in (19c) can be exploited to reduce the dimension of each subproblem (19) from global to local scale, thus yielding a localized synthesis procedure. The general and detailed dimension reduction algorithm for arbitrary locality constraints can be found in Appendix A. Here we rather highlight the main results and consequences of this approach through the use of some examples.

In particular, in Appendix A we show that the optimization subproblem (19) is equivalent to

  minimize_{Φ(s_j,c_j)}   ḡ_j(Φ(s_j,c_j))                     (21a)
  subject to              Z_AB(t_j,s_j) Φ(s_j,c_j) = I(t_j,c_j)    (21b)
                          Φ(s_j,c_j) ∈ L(s_j,c_j) ∩ F_T ∩ X̄_j     (21c)

where s_j and t_j are sets of positive integers, and ḡ_j and X̄_j are the objective function and constraint set restricted to the reduced dimensional space, respectively. Roughly speaking, the set s_j is the collection of optimization variables contained within the localized region specified by L(:,c_j), and the set t_j is the collection of states that are directly affected by the optimization variables in s_j. The complexity of solving (21) is determined by the cardinality of the sets c_j, s_j, and t_j, which in turn are determined by the number of nonzero elements allowed by the locality constraint and the structure of the system state-space matrices (2). For instance, the cardinality of the set s_j is equal to the number of nonzero rows of the locality constraint L(:,c_j). When the locality constraint and the system matrices are suitably sparse, it is possible to make the size of these sets much smaller than the size of the global network. In this case, the global optimization subproblem (19) reduces to a local optimization subproblem (21) which depends on the local plant model Z_AB(t_j,s_j) only.
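The index sets s_j and t_j in (21) can be read off directly from sparsity patterns. The following sketch performs this bookkeeping for a chain example, assuming the locality constraint and the plant are given as boolean support matrices; the helper function and the data are illustrative and are not the algorithm of Appendix A.

```python
import numpy as np

def local_index_sets(L_supp, ZAB_supp, c_j):
    """s_j: rows of L(:, c_j) allowed to be nonzero (variables in the localized region);
       t_j: rows of Z_AB touching a column in s_j (states directly affected by them)."""
    s_j = np.flatnonzero(L_supp[:, c_j].any(axis=1))
    t_j = np.flatnonzero(ZAB_supp[:, s_j].any(axis=1))
    return s_j, t_j

nx = 50                                                   # chain length (illustrative)
A_supp = np.eye(nx, dtype=bool) | np.eye(nx, k=1, dtype=bool) | np.eye(nx, k=-1, dtype=bool)
ZAB_supp = np.hstack([A_supp, np.eye(nx, dtype=bool)])    # support of [zI - A, -B2] with B2 = I
L_supp = np.zeros((2 * nx, 1), dtype=bool)                # locality pattern for column 1 of [R; M]
L_supp[[0, 1, nx, nx + 1, nx + 2], 0] = True              # x allowed on nodes {1,2}, u on {1,2,3}

s1, t1 = local_index_sets(L_supp, ZAB_supp, [0])
print(len(s1), len(t1))    # both stay O(1) as nx grows
```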
[Fig. 2. The localized region for w_1 in a chain of subsystems.]

We illustrate the idea of dimension reduction through a simple example below.

Example 9: Consider a chain of n LTI subsystems with a single disturbance w_1 directly affecting state x_1, as shown in Figure 2. We analyze the first column of the LLQR problem (20), which is given by

  minimize_{R(:,1),M(:,1)}   ‖ [C_1 D_12] [R(:,1); M(:,1)] ‖²_{H_2}         (22a)
  subject to                 [zI − A  −B_2] [R(:,1); M(:,1)] = I(:,1)       (22b)
                             [R(:,1); M(:,1)] ∈ L(:,1) ∩ F_T ∩ (1/z) RH_∞.  (22c)

The optimization variables are the system response from the disturbance w_1 to the global state x and global control action u. Assume that the locality constraint imposed on R and M is such that the closed loop response due to w_1 satisfies x_3 = ··· = x_n = 0 and u_4 = ··· = u_n = 0. In this case, we can solve (22) using only the information contained within the localized region shown in Figure 2. Intuitively, as long as the boundary constraint x_3 = 0 is enforced, the perturbation on the system caused by w_1 cannot propagate beyond the localized region (this intuition is formalized in Appendix A). Note that the complexity of analyzing the system response to w_1 is completely independent of the size of the global system. Even more dramatically, one could replace the section of the system outside of the localized region with any other set of dynamics, and the localized synthesis problem (22) addressing the system response to w_1 would remain unchanged. This localized disturbance-wise analysis can then be performed independently for all other disturbances by exploiting the superposition principle satisfied by LTI systems.
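The single-column problem (22) is a small convex program once the FIR and locality constraints are imposed. The sketch below solves it with CVXPY for a hypothetical chain, using an equivalent time-domain form of the achievability constraint (22b) (R[1] = I, R[t+1] = A R[t] + B_2 M[t] for 1 ≤ t < T, and A R[T] + B_2 M[T] = 0 to close the FIR response); the plant data, horizon, and localized region are assumptions made for illustration only.

```python
import cvxpy as cp
import numpy as np

n, T = 20, 10                                        # chain length and FIR horizon (illustrative)
A = 0.4 * (np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1))
B2 = np.eye(n)
C1 = np.vstack([np.eye(n), np.zeros((n, n))])        # LQR-style cost so that [C1 D12][R; M] stacks x and u
D12 = np.vstack([np.zeros((n, n)), np.eye(n)])
x_loc, u_loc = set(range(2)), set(range(3))          # localized region: x on nodes {1,2}, u on {1,2,3}

r = [cp.Variable(n) for _ in range(T + 1)]           # r[t], m[t] = first columns of R[t], M[t]
m = [cp.Variable(n) for _ in range(T + 1)]           # index 0 unused since R, M are strictly proper
e1 = np.zeros(n); e1[0] = 1.0

cons = [r[1] == e1]
cons += [r[t + 1] == A @ r[t] + B2 @ m[t] for t in range(1, T)]
cons += [A @ r[T] + B2 @ m[T] == 0]                  # FIR: the response settles after T steps
cons += [r[t][i] == 0 for t in range(1, T + 1) for i in range(n) if i not in x_loc]
cons += [m[t][i] == 0 for t in range(1, T + 1) for i in range(n) if i not in u_loc]

obj = sum(cp.sum_squares(C1 @ r[t] + D12 @ m[t]) for t in range(1, T + 1))
prob = cp.Problem(cp.Minimize(obj), cons)
prob.solve()
print(prob.status, prob.value)                       # a localized closed loop response to w1
```

In this sketch the achievability constraint is written over all n rows for simplicity; after the reduction (21), only the rows indexed by t_j would need to be kept.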
D. Row-wise Separable SLS Problems

The technique described in the previous subsections can be readily extended to row-wise separable SLS problems – we briefly introduce the corresponding definitions and end with an example of a row-wise separable SLS problem.

Consider a state estimation SLS problem [22] given by

  minimize_Φ   g(Φ)                                           (23a)
  subject to   Φ Z_AC = I                                     (23b)
               Φ ∈ S,                                         (23c)

with Φ = [R N] and Z_AC = [zI − A^T  −C_2^T]^T. Let {r_1,...,r_q} be a partition of the set {1,...,n_x}. The row-wise separability of the SLO and SLC in (23) is defined as follows.

Definition 3: The system level objective g(Φ) in (23a) is said to be row-wise separable with respect to the row-wise partition {r_1,...,r_q} if

  g(Φ) = Σ_{j=1}^{q} g_j(Φ(r_j,:))                            (24)

for some functionals g_j(·) for j = 1,...,q.

Definition 4: The system level constraint S in (23c) is said to be row-wise separable with respect to the row-wise partition {r_1,...,r_q} if

  Φ ∈ S if and only if Φ(r_j,:) ∈ S_j for j = 1,...,q         (25)

for some sets S_j for j = 1,...,q.

An example of an SLS problem satisfying row-wise separability is the localized distributed Kalman filter (LDKF) problem in [2] (naturally seen as the dual to the LLQR problem), which can be posed as

  minimize_{R,N}   ‖ [R N] [B_1; D_21] ‖²_{H_2}
  subject to       [R N] [zI − A; −C_2] = I
                   [R N] ∈ [L_R L_N] ∩ F_T ∩ (1/z) RH_∞.

IV. PARTIALLY SEPARABLE SLS PROBLEMS

In this section, we define the notion of partially separable objectives and constraints, and show how these properties can be exploited to solve the general form of the localized SLS problem (11) in a localized way using distributed optimization techniques such as ADMM.2 We further show that many constrained optimal control problems are partially separable. Examples include the (localized) H_2 optimal control problem with joint sensor and actuator regularization and the (localized) mixed H_2/L_1 optimal control problem.

2 Note that (11) is neither column-wise nor row-wise separable due to the coupling constraints (6a) and (6b).

Similar to the organization of the previous section, we begin by defining the partial separability of a generic optimization problem. We then specialize our discussion to the partially separable SLS problem (11), before ending this section with a discussion of computational techniques that can be used to accelerate the ADMM algorithm as applied to solving partially separable SLS problems.

A. Partially Separable Problems

We begin with a generic optimization problem given by

  minimize_Φ   g(Φ)                                           (26a)
  subject to   Φ ∈ S,                                         (26b)

where Φ is an m × n transfer matrix, g(·) a functional objective, and S a set constraint. We assume that the optimization problem (26) can be written in the form

  minimize_Φ   g^(r)(Φ) + g^(c)(Φ)                            (27a)
  subject to   Φ ∈ S^(r) ∩ S^(c),                             (27b)

where g^(r)(·) and S^(r) are row-wise separable and g^(c)(·) and S^(c) are column-wise separable. If optimization problem (26) can be written in the form of problem (27), then optimization problem (26) is called partially separable.

Due to the coupling between the row-wise and column-wise separable components, optimization problem (27) admits neither a row-wise nor a column-wise decomposition. Our strategy is to instead use distributed optimization techniques such as ADMM to decouple the row-wise separable component and the column-wise separable component: we show that this leads to iterate subproblems that are column/row-wise separable, allowing the results of the previous section to be applied to solve the iterate subproblems using localized and parallel computation.

Definition 5: Let {r_1,...,r_q} be a partition of the set {1,...,m} and {c_1,...,c_p} be a partition of the set {1,...,n}. The functional objective g(Φ) in (26a) is said to be partially separable with respect to the row-wise partition {r_1,...,r_q} and the column-wise partition {c_1,...,c_p} if g(Φ) can be written as the sum of g^(r)(Φ) and g^(c)(Φ), where g^(r)(Φ) is row-wise separable with respect to the row-wise partition {r_1,...,r_q} and g^(c)(Φ) is column-wise separable with respect to the column-wise partition {c_1,...,c_p}. Specifically, we have

  g(Φ)     = g^(r)(Φ) + g^(c)(Φ)
  g^(r)(Φ) = Σ_{j=1}^{q} g_j^(r)(Φ(r_j,:))
  g^(c)(Φ) = Σ_{j=1}^{p} g_j^(c)(Φ(:,c_j))                    (28)

for some functionals g_j^(r)(·) for j = 1,...,q, and g_j^(c)(·) for j = 1,...,p.

Remark 5: The column-wise and row-wise separable objective functions defined in Definitions 1 and 3, respectively, are special cases of partially separable objective functions.

Example 10: Let Φ be the optimization variable, and E and F be two matrices of compatible dimension. The objective function

  ‖EΦ‖²_F + ‖ΦF‖²_F

is then partially separable. Specifically, the first term is column-wise separable, and the second term is row-wise separable.
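The split in Example 10 can be checked directly: the first term sums over columns and the second over rows. A toy numerical verification with random data (all sizes are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 6, 8
E = rng.standard_normal((4, m))
F = rng.standard_normal((n, 3))
Phi = rng.standard_normal((m, n))

total = np.linalg.norm(E @ Phi, 'fro') ** 2 + np.linalg.norm(Phi @ F, 'fro') ** 2

col_part = sum(np.linalg.norm(E @ Phi[:, [j]]) ** 2 for j in range(n))   # g^(c): column-wise
row_part = sum(np.linalg.norm(Phi[[i], :] @ F) ** 2 for i in range(m))   # g^(r): row-wise
print(np.isclose(total, col_part + row_part))                            # True
```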
Definition 6: The set constraint S in (26b) is said to be partially separable with respect to the row-wise partition {r_1,...,r_q} and the column-wise partition {c_1,...,c_p} if S can be written as the intersection of two sets S^(r) and S^(c), where S^(r) is row-wise separable with respect to the row-wise partition {r_1,...,r_q} and S^(c) is column-wise separable with respect to the column-wise partition {c_1,...,c_p}. Specifically, we have

  S = S^(r) ∩ S^(c)
  Φ ∈ S^(r) ⟺ Φ(r_j,:) ∈ S_j^(r), ∀j
  Φ ∈ S^(c) ⟺ Φ(:,c_j) ∈ S_j^(c), ∀j                          (29)

for some sets S_j^(r) for j = 1,...,q and S_j^(c) for j = 1,...,p.

Remark 6: The column-wise and row-wise separable constraints defined in Definitions 2 and 4, respectively, are special cases of partially separable constraints.

Example 11 (Affine Subspace): The affine subspace constraint

  G Φ = H and Φ E = F

is partially separable. Specifically, the first constraint is column-wise separable, and the second is row-wise separable.

Example 12 (Induced Matrix Norm): The induced 1-norm of a matrix Φ is given by

  ‖Φ‖_1 = max_{1≤j≤n} Σ_{i=1}^{m} |Φ_ij|,

and the induced ∞-norm of a matrix is given by

  ‖Φ‖_∞ = max_{1≤i≤m} Σ_{j=1}^{n} |Φ_ij|.

The following constraint is partially separable:

  ‖Φ‖_1 ≤ γ_1 and ‖Φ‖_∞ ≤ γ_∞.

Specifically, the first term is column-wise separable, and the second term is row-wise separable.

Assume now that the objective and the constraints in optimization problem (26) are both partially separable with respect to a row-wise partition {r_1,...,r_q} and a column-wise partition {c_1,...,c_p}. In this case, the optimization problem (26) is partially separable and can be rewritten in the form specified by equation (27). We now propose an ADMM based algorithm that exploits the partially separable structure of the optimization problem. As we show in the next subsection, when this method is applied to the SLS problem (11), the iterate subproblems are column/row-wise separable, allowing us to apply the methods described in the previous section. In particular, if locality constraints are imposed, the iterate subproblems can be solved using localized and parallel computation.

Let Ψ be a duplicate of the optimization variable Φ. We define the extended-real-valued functionals h^(r)(Φ) and h^(c)(Ψ) by

  h^(r)(Φ) = { g^(r)(Φ) if Φ ∈ S^(r);  ∞ otherwise
  h^(c)(Ψ) = { g^(c)(Ψ) if Ψ ∈ S^(c);  ∞ otherwise.            (30)

Problem (27) can then be reformulated as

  minimize_{Φ,Ψ}   h^(r)(Φ) + h^(c)(Ψ)
  subject to       Φ = Ψ.                                      (31)

Problem (31) can be solved using ADMM [23]:

  Φ^{k+1} = argmin_Φ ( h^(r)(Φ) + (ρ/2) ‖Φ − Ψ^k + Λ^k‖²_{H_2} )         (32a)
  Ψ^{k+1} = argmin_Ψ ( h^(c)(Ψ) + (ρ/2) ‖Ψ − Φ^{k+1} − Λ^k‖²_{H_2} )     (32b)
  Λ^{k+1} = Λ^k + Φ^{k+1} − Ψ^{k+1}                                       (32c)

where the square of the H_2 norm is computed as in (15). In Appendix B, we provide a stopping criterion and prove convergence of the ADMM algorithm (32a) - (32c) to an optimal solution of (27) (or equivalently, (31)) under the following assumptions.

Assumption 1: Problem (27) has a feasible solution in the relative interior of the set S.

Assumption 2: The functionals g^(r)(·) and g^(c)(·) are closed, proper, and convex.

Assumption 3: The sets S^(r) and S^(c) are closed and convex.
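To make the iteration (32) concrete, the sketch below runs it on a variant of the partially separable objective of Example 10 (with affine targets so that the minimizer is nontrivial), with a shared sparsity pattern playing the role of the constraints that appear in both S^(r) and S^(c). Each half-step splits into small per-row or per-column least-squares problems, mirroring the localized computation discussed above; the data, penalty ρ, and iteration count are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, rho = 6, 8, 1.0
E = rng.standard_normal((m, m)); Y = rng.standard_normal((m, n))   # ||E Psi - Y||_F^2: column-wise part
F = rng.standard_normal((n, n)); W = rng.standard_normal((m, n))   # ||Phi F - W||_F^2: row-wise part
supp = rng.random((m, n)) < 0.4                                    # sparsity pattern shared by S^(r), S^(c)
supp[np.arange(m), np.arange(m) % n] = True

Phi = np.zeros((m, n)); Psi = np.zeros((m, n)); Lam = np.zeros((m, n))
for _ in range(300):
    # (32a): row-wise separable step -- one small least-squares problem per row
    V = Psi - Lam
    Phi = np.zeros((m, n))
    for i in range(m):
        s = np.flatnonzero(supp[i])
        Fs = F[s, :]
        rhs = 2 * Fs @ W[i, :] + rho * V[i, s]
        Phi[i, s] = np.linalg.solve(2 * Fs @ Fs.T + rho * np.eye(s.size), rhs)
    # (32b): column-wise separable step -- one small least-squares problem per column
    Wk = Phi + Lam
    Psi = np.zeros((m, n))
    for j in range(n):
        s = np.flatnonzero(supp[:, j])
        if s.size == 0:
            continue
        Es = E[:, s]
        rhs = 2 * Es.T @ Y[:, j] + rho * Wk[s, j]
        Psi[s, j] = np.linalg.solve(2 * Es.T @ Es + rho * np.eye(s.size), rhs)
    # (32c): dual update
    Lam = Lam + Phi - Psi

print(np.linalg.norm(Phi - Psi))      # consensus residual; shrinks as the iteration proceeds
```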
B. Partially Separable SLS Problems

We now specialize our discussion to the output feedback SLS problem (11). We use

  Φ = [R N; M L]

to represent the system response we want to optimize for. Denote by Z_AB the transfer matrix [zI − A  −B_2] and by Z_AC the transfer matrix [zI − A^T  −C_2^T]^T. Let J_B be the matrix on the right-hand side of (6a) and J_C be the matrix on the right-hand side of (6b). The localized SLS problem (11) can then be written as

  minimize_Φ   g(Φ)                                           (33a)
  subject to   Z_AB Φ = J_B                                   (33b)
               Φ Z_AC = J_C                                   (33c)
               Φ ∈ S                                          (33d)

with S = L ∩ F_T ∩ X. Note that, as already discussed, the affine subspace constraints (33b) and (33c) are partially separable with respect to arbitrary column-wise and row-wise partitions, respectively. Thus the SLS problem (33) is partially separable if the SLO (33a) and the SLC Φ ∈ X are partially separable. In particular, if X is partially separable, we can express the original SLC S as an intersection of the sets S^(r) = L ∩ F_T ∩ X^(r) and S^(c) = L ∩ F_T ∩ X^(c), where X^(r) is a row-wise separable component of X and X^(c) a column-wise separable component of X. Note that the locality constraint and the FIR constraint are included in both S^(r) and S^(c). This is the key point that allows the subroutines (32a) - (32b) of the ADMM algorithm to be solved using the techniques described in the previous section.

To illustrate how the iterate subproblems of the ADMM algorithm (32), as applied to the SLS problem (33), can be solved using localized and parallel computation, we analyze in more detail the iterate subproblem (32b), which is given by

  minimize_Ψ   g^(c)(Ψ) + (ρ/2) ‖Ψ − Φ^{k+1} − Λ^k‖²_{H_2}    (34a)
  subject to   Ψ ∈ S^(c)                                       (34b)

with S^(c) = L ∩ F_T ∩ X^(c). The H_2 norm regularizer in (34a) is column-wise separable with respect to an arbitrary column-wise partition. As the objective g^(c)(·) and the constraint S^(c) are column-wise separable with respect to a given column-wise partition, we can use the column-wise separation technique described in the previous section to decompose (34) into parallel subproblems. We can then exploit the locality constraint L and use the technique described in Section III-C and Appendix A to reduce the dimension of each subproblem from global to local scale. Similarly, iterate subproblem (32a) can also be solved using parallel and localized computation by exploiting its row-wise separability. Finally, update equation (32c) trivially decomposes element-wise since it is a matrix addition.

We now give some examples of partially separable SLOs.

Example 13 (Norm Optimal Control): Consider the SLO of the distributed optimal control problem in (10), with the norm given by either the square of the H_2 norm or the element-wise ℓ_1 norm defined in Example 8. Suppose that there exists a permutation matrix Π such that the matrix [B_1^T D_21^T] Π is block diagonal. Using a similar argument as in Example 7, we can find a column-wise partition to decompose the SLO in a column-wise manner. Similarly, suppose that there exists a permutation matrix Π such that the matrix [C_1 D_12] Π is block diagonal. We can then find a row-wise partition to decompose the SLO in a row-wise manner. In both cases, the SLO is column/row-wise separable and thus partially separable.

Example 14 (Sensor and Actuator Norm): Consider the weighted actuator norm defined in [4], [24], which is given by

  ‖ μ [M L] ‖_U = Σ_{i=1}^{n_u} μ_i ‖ e_i^T [M L] ‖_{H_2}      (35)

where μ is a diagonal matrix with μ_i being its ith diagonal entry, and e_i is a unit vector with 1 in its ith entry and 0 elsewhere. The actuator norm (35) can be viewed as an infinite dimensional analog of the weighted ℓ_1/ℓ_2 norm, also known as the group lasso [26] in the statistical learning literature. Adding this norm as a regularizer to an SLS problem induces row-wise sparse structure in the transfer matrix [M L]. Recall from Theorem 1 that the controller achieving the desired system response can be implemented by (7). If the ith row of the transfer matrix [M L] is identically zero, then the ith component of the control action u_i is always equal to zero, and therefore the actuator at node i corresponding to control action u_i can be removed without changing the closed loop response. It is clear that the actuator norm defined in (35) is row-wise separable with respect to an arbitrary row-wise partition. This still holds true when the actuator norm is defined by the ℓ_1/ℓ_∞ norm. Similarly, consider the weighted sensor norm given by

  ‖ [N; L] λ ‖_Y = Σ_{i=1}^{n_y} λ_i ‖ [N; L] e_i ‖_{H_2}      (36)

where λ is a diagonal matrix with λ_i being its ith diagonal entry. The sensor norm (36), when added as a regularizer, induces column-wise sparsity in the transfer matrix [N^T L^T]^T. Using the controller implementation (7), the sensor norm can therefore be viewed as regularizing the number of sensors used by a controller. For instance, if the ith column of the transfer matrix [N^T L^T]^T is identically zero, then the sensor at node i and its corresponding measurement y_i can be removed without changing the closed loop response. The sensor norm defined in (36) is column-wise separable with respect to any column-wise partition.
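Both regularizers in Example 14 are group norms computed directly from the FIR spectral components. The following small numerical sketch (data and weights are illustrative assumptions) evaluates the actuator norm (35) and shows how a vanishing row norm flags a removable actuator; the sensor norm (36) is the same computation applied to the columns of [N; L].

```python
import numpy as np

rng = np.random.default_rng(5)
T, n_u, n_w = 10, 4, 6
ML = rng.standard_normal((T + 1, n_u, n_w))    # FIR spectral components of [M  L]
ML[:, 2, :] = 0.0                              # pretend the regularizer zeroed out row 3
mu = np.ones(n_u)                              # actuator weights (diagonal of mu)

row_h2 = np.sqrt((ML ** 2).sum(axis=(0, 2)))   # H2 norm of each row e_i^T [M  L]
actuator_norm = float(mu @ row_h2)             # the group-lasso penalty (35)

removable = np.flatnonzero(row_h2 < 1e-9)      # actuators that can be removed
print(actuator_norm, removable)                # e.g. a scalar penalty and the index [2]
```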
Example 15 (Combination): From Definition 5, it is straightforward to see that the class of partially separable SLOs with respect to the same partitions is closed under summation. Therefore, we can combine the partially separable SLOs described above, and the resulting SLO is still partially separable. For instance, consider the SLO given by

  g(R, M, N, L) = ‖ [C_1 D_12] [R N; M L] [B_1; D_21] ‖²_{H_2}
                  + ‖ μ [M L] ‖_U + ‖ [N; L] λ ‖_Y             (37)

where μ and λ set the relative weights of the actuator and sensor regularizers with respect to the H_2 performance, respectively. If there exists a permutation matrix Π such that the matrix [C_1 D_12] Π is block diagonal, then the SLO (37) is partially separable. Specifically, the H_2 norm and the actuator regularizer belong to the row-wise separable component, and the sensor regularizer belongs to the column-wise separable component.3

3 Note that an alternative penalty is proposed in [24] for the design of joint actuation and sensing architectures; it is however more involved to define, and hence we restrict ourselves to the penalty above for the purposes of illustrating the concept of partially separable SLOs.

We now provide some examples of partially separable SLCs.

Example 16 (L_1 Constraint): The L_1 norm of a transfer matrix is given by its worst case ℓ_∞ to ℓ_∞ gain. In particular, the L_1 norm [27] of an FIR transfer matrix G ∈ F_T is given by

  ‖G‖_{L_1} = max_i Σ_j Σ_{t=0}^{T} |g_ij[t]|.                 (38)

We can therefore add the constraint

  ‖ [R N; M L] [B_1; D_21] ‖_{L_1} ≤ γ                         (39)

to the optimization problem (33) for some γ to control the worst-case amplification of ℓ_∞ bounded signals. From the definition (38), the SLC (39) is row-wise separable with respect to any row-wise partition.

Example 17 (Combination): From Definition 6, the class of partially separable SLCs with respect to the same row and column partitions is closed under intersection, allowing partially separable SLCs to be combined. For instance, the combination of a locality constraint L, a FIR constraint F_T, and an L_1 constraint as in equation (39) is partially separable. This property is extremely useful as it provides a unified framework for dealing with several partially separable constraints at once.

Using the previously described examples of partially separable SLOs and SLCs, we now consider two partially separable SLS problems: (i) the localized H_2 optimal control problem with joint sensor and actuator regularization, and (ii) the localized mixed H_2/L_1 optimal control problem. These two problems are used in Section V as case study examples.

Example 18: The localized H_2 optimal control problem