Problem formulation

Noise-corrupted leader selection

We consider networks in which each node updates a scalar state psi_i,

        \dot{\psi}_i \;=\; u_i \;+\; w_i, \qquad i \,=\, 1, \ldots, n


where u_i is the control input and w_i is a zero-mean, unit-variance white stochastic disturbance. A node is a follower if it forms its control action using only relative information exchange with its neighbors,

        u_i \;=\; -\sum_{j \,\in\, {\cal N}_i} ( \psi_i \,-\, \psi_j ).


A node is a leader if, in addition to relative information exchange with its neighbors, it also has access to its own state

        u_i \;=\; -\sum_{j \,\in\, {\cal N}_i} ( \psi_i \,-\, \psi_j ) \;-\; \kappa_i \, \psi_i.


Here, kappa_i is a positive number and {\cal N}_i is the set of all nodes that node i communicates with. A state-space representation of the leader-follower consensus network is thus given by

        \dot{\psi} \;=\; -\, (L \,+\, {\rm diag}(\kappa) \, {\rm diag}(x)) \, \psi \;+\; w


where x is a Boolean-valued vector with its ith entry x_i in {0, 1}, indicating that node i is a leader if x_i = 1 and that node i is a follower if x_i = 0. The steady-state covariance matrix of psi

        \Sigma \;=\; \displaystyle{\lim_{t \,\to\, \infty}} \; {\cal E} \left( \psi(t) \, \psi^T(t) \right)


can be determined from the Lyapunov equation

        (L \,+\, {\rm diag}(\kappa) \, {\rm diag}(x)) \, \Sigma \;+\; \Sigma \, (L \,+\, {\rm diag}(\kappa) \, {\rm diag}(x)) \;=\; I


where {\cal E}(\cdot) is the expectation operator and {\cal E} \left( w(t) \, w^T(\tau) \right) = I \, \delta(t - \tau). The unique solution of the Lyapunov equation is given by

        \Sigma \;=\; \frac{1}{2} \, (L \,+\, {\rm diag}(\kappa) \, {\rm diag}(x))^{-1}.
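As a quick numerical sanity check, the closed-form solution can be verified against the Lyapunov equation. The 3-node path graph, the unit gains kappa, and the leader set below are illustrative choices for the sketch, not data from the text.

```python
import numpy as np

# Illustrative example (assumed data): 3-node path graph, unit gains,
# node 1 acting as the noise-corrupted leader.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])   # graph Laplacian of the path graph
kappa = np.ones(3)                # positive gains kappa_i
x = np.array([1., 0., 0.])        # x_i = 1 marks node i as a leader

# diag(kappa) diag(x) is the diagonal matrix with entries kappa_i * x_i
A = L + np.diag(kappa * x)

# Closed-form Lyapunov solution: Sigma = (1/2) A^{-1}
Sigma = 0.5 * np.linalg.inv(A)

# Verify that Sigma solves A Sigma + Sigma A = I
assert np.allclose(A @ Sigma + Sigma @ A, np.eye(3))
print(np.trace(Sigma))            # total steady-state variance
```

Since L + diag(kappa) diag(x) is symmetric and positive definite whenever the graph is connected and at least one node is a leader, the inverse exists and Sigma is well defined.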


We use the total steady-state variance

        {\rm trace}\left( \Sigma \right) \;=\; \frac{1}{2} \, {\rm trace}\left( (L \,+\, {\rm diag}(\kappa) \, {\rm diag}(x))^{-1} \right)


to quantify deviation from consensus of stochastically forced networks. Thus, the problem of identifying N_l < n leaders that are most effective in reducing the mean-square deviation can be formulated as

        \begin{array}{lrcl}
        {\rm minimize}   & J(x) & = & {\rm trace}\left( (L \,+\, {\rm diag}(\kappa) \, {\rm diag}(x))^{-1} \right) \\[0.25cm]
        {\rm subject~to} & x_i & \in & \{0, 1\}, \quad i \,=\, 1, \ldots, n \\[0.15cm]
                         & \displaystyle{\sum_{i \,=\, 1}^{n}} \, x_i & = & N_l
        \end{array}
        \qquad ({\rm LS1})

where the graph Laplacian L and the vector kappa with positive elements are problem data, and the Boolean-valued vector x with cardinality N_l is the optimization variable.
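For small networks, ({\rm LS1}) can be solved exactly by enumerating all leader sets of size N_l. The sketch below does this for an assumed 3-node path graph with unit gains; since the search scales as n choose N_l, this is only a brute-force baseline, not the method developed in the paper.

```python
import itertools
import numpy as np

def J(L, kappa, x):
    """Objective of (LS1): trace((L + diag(kappa) diag(x))^{-1})."""
    return np.trace(np.linalg.inv(L + np.diag(kappa * x)))

def indicator(S, n):
    """Boolean-valued vector x with x_i = 1 for i in the leader set S."""
    x = np.zeros(n)
    x[list(S)] = 1.0
    return x

# Assumed problem data: 3-node path graph, unit gains, one leader.
L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
kappa = np.ones(3)
n, N_l = 3, 1

# Exhaustive search over all leader sets of cardinality N_l
best = min(itertools.combinations(range(n), N_l),
           key=lambda S: J(L, kappa, indicator(S, n)))
print(best, J(L, kappa, indicator(best, n)))
```

For this example the search selects the center node of the path as the single most effective leader.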

In sensor networks, it can be shown that ({\rm LS1}) is equivalent to the problem of choosing N_l absolute position measurements among n sensors in order to minimize the variance of the estimation error; see our paper for details.

Noise-free leader selection

Since the leaders are subject to stochastic disturbances, we refer to ({\rm LS1}) as the noise-corrupted leader selection problem. We also consider the selection of noise-free leaders, which follow their desired trajectories at all times. Equivalently, in coordinates that determine deviation from the desired trajectory, the state of every leader is identically equal to zero, and the network dynamics are thereby governed by the dynamics of the followers

        \dot{\psi}_f \;=\; -\, L_f \, \psi_f \;+\; w_f.


Here, L_f is obtained from L by eliminating all rows and columns associated with the leaders.

Thus, the problem of selecting leaders that minimize the steady-state variance of psi_f amounts to

        \begin{array}{lrcl}
        {\rm minimize}   & J_f(x) & = & {\rm trace} \, (L_f^{-1}) \\[0.25cm]
        {\rm subject~to} & x_i & \in & \{0, 1\}, \quad i \,=\, 1, \ldots, n \\[0.15cm]
                         & \displaystyle{\sum_{i \,=\, 1}^{n}} \, x_i & = & N_l
        \end{array}
        \qquad ({\rm LS2})

where the Boolean-valued vector x with cardinality N_l is the optimization variable. The index set of nonzero elements of x identifies the rows and columns of the graph Laplacian L that need to be deleted in order to obtain L_f. For example, for a path graph with three nodes, assigning the center node as a noise-free leader implies that x = [0 1 0]^T. Thus, deleting the second row and column of the graph Laplacian

        L \;=\; \left[ \begin{array}{rrr} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{array} \right]


yields the corresponding principal submatrix

        L_f \;=\; \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right].
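The same bookkeeping is easy to carry out numerically; a minimal sketch of forming the principal submatrix L_f for this path-graph example:

```python
import numpy as np

L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])   # path-graph Laplacian from the example
x = np.array([0, 1, 0])           # center node is the noise-free leader

followers = np.flatnonzero(x == 0)       # indices of the follower nodes
L_f = L[np.ix_(followers, followers)]    # delete leader rows and columns

assert np.array_equal(L_f, np.eye(2))    # matches the 2 x 2 identity above
print(np.trace(np.linalg.inv(L_f)))      # J_f(x) for this leader choice
```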


Since the objective function J_f in ({\rm LS2}) is not expressed explicitly in terms of the optimization variable x, it is difficult to examine its basic properties, including convexity. In our paper, we show that J_f(x) can be equivalently written as

        J_f(x) \;=\; {\rm trace}\left( \left( L \circ (({\bf 1} - x)({\bf 1} - x)^T) \,+\, {\rm diag}(x) \right)^{-1} \right) \;-\; {\bf 1}^T x


where \circ denotes elementwise (Hadamard) multiplication of two matrices and {\bf 1} \in {\bf R}^n is the vector of all ones.
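This equivalence can be spot-checked numerically. The sketch below compares the explicit expression with trace(L_f^{-1}) on the path-graph example, using the leader assignment x = [0 1 0]^T from above.

```python
import numpy as np

L = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])
x = np.array([0., 1., 0.])        # center node as the noise-free leader
ones = np.ones(3)

# Explicit form: trace((L o ((1-x)(1-x)^T) + diag(x))^{-1}) - 1^T x
M = L * np.outer(ones - x, ones - x) + np.diag(x)
J_explicit = np.trace(np.linalg.inv(M)) - ones @ x

# Direct form: delete leader rows/columns of L and take trace(L_f^{-1})
f = np.flatnonzero(x == 0)
J_direct = np.trace(np.linalg.inv(L[np.ix_(f, f)]))

assert np.isclose(J_explicit, J_direct)
print(J_explicit, J_direct)
```

Both evaluations agree, as the identity requires; the explicit form has the advantage that x appears directly in the expression, which makes the objective amenable to analysis.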

Finally, in sensor networks, it can be shown that ({\rm LS2}) amounts to choosing N_l sensors with a priori known positions such that the variance of the estimation error is minimized; see our paper for details.