
Controllability

Dynamic system property

Controllability is an important property of a control system and plays a crucial role in many regulation problems, such as the stabilization of unstable systems using feedback, tracking problems, obtaining optimal control strategies, or simply prescribing an input that has a desired effect on the state.

Controllability and observability are dual notions. Controllability pertains to regulating the state by a choice of a suitable input, while observability pertains to being able to know the state by observing the output (assuming that the input is also being observed).

Broadly speaking, the concept of controllability relates to the ability to steer a system around in its configuration space using only certain admissible manipulations. The exact definition varies depending on the framework or the type of models dealt with.

The following are examples of variants of notions of controllability that have been introduced in the systems and control literature:

State controllability

The state of a deterministic system, which is the set of values of all the system's state variables (those variables characterized by dynamic equations), completely describes the system at any given time. In particular, no information on the past of a system is needed to help in predicting the future, if the states at the present time are known and all current and future values of the control variables (those whose values can be chosen) are known.

Complete state controllability (or simply controllability if no other context is given) describes the ability of an external input (the vector of control variables) to move the internal state of a system from any initial state to any final state in a finite time interval.[1]: 737 

That is, we can informally define controllability as follows: if for any initial state $\mathbf{x_0}$ and any final state $\mathbf{x_f}$ there exists an input sequence to transfer the system state from $\mathbf{x_0}$ to $\mathbf{x_f}$ in a finite time interval, then the system modeled by the state-space representation is controllable. For the simplest example of a continuous, LTI system, the row dimension of the state-space expression $\dot{\mathbf{x}} = A\mathbf{x}(t) + B\mathbf{u}(t)$ determines the interval; each row contributes a vector in the state space of the system. If there are not enough such vectors to span the state space of $\mathbf{x}$, then the system cannot achieve controllability. It may be necessary to modify $A$ and $B$ to better approximate the underlying differential relationships they estimate in order to achieve controllability.

Controllability does not mean that a reached state can be maintained, merely that any state can be reached.

Controllability does not mean that arbitrary paths can be made through state space, only that there exists a path within a finite time interval. When the time interval can also be specified, the dynamical system is often referred to as being strongly controllable.

Continuous linear systems[edit]

Consider the continuous linear system [note 1]

$$\dot{\mathbf{x}}(t) = A(t)\mathbf{x}(t) + B(t)\mathbf{u}(t)$$
$$\mathbf{y}(t) = C(t)\mathbf{x}(t) + D(t)\mathbf{u}(t).$$

There exists a control $u$ that transfers the state from $x_0$ at time $t_0$ to $x_1$ at time $t_1 > t_0$ if and only if $\phi(t_0,t_1)x_1 - x_0$ lies in the column space of

$$W(t_0,t_1) = \int_{t_0}^{t_1} \phi(t_0,t)B(t)B(t)^{T}\phi(t_0,t)^{T}\,dt,$$

where $\phi$ is the state-transition matrix and $W(t_0,t_1)$ is the controllability Gramian.

In fact, if $\eta_0$ is a solution of $W(t_0,t_1)\eta = \phi(t_0,t_1)x_1 - x_0$, then the control $u(t) = B(t)^{T}\phi(t_0,t)^{T}\eta_0$ performs the desired transfer.
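
The construction above can be checked numerically. The following Python sketch is a minimal illustration under assumed values: it uses a time-invariant double-integrator system (chosen for this example, not taken from the article), for which the state-transition matrix reduces to $\phi(t_0,t) = e^{A(t_0 - t)}$, approximates the Gramian by quadrature, and verifies that the resulting input steers the state to the target.

import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

# Assumed example system (double integrator); not from the article.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
t0, t1 = 0.0, 2.0
x0 = np.array([0.0, 0.0])          # initial state
x1 = np.array([1.0, 0.0])          # target state

def phi(ta, tb):
    # State-transition matrix mapping x(tb) to x(ta) in the LTI case.
    return expm(A * (ta - tb))

# Controllability Gramian W(t0, t1) by simple trapezoidal quadrature.
ts = np.linspace(t0, t1, 2001)
vals = np.array([phi(t0, t) @ B @ B.T @ phi(t0, t).T for t in ts])
W = np.trapz(vals, ts, axis=0)

# Solve W * eta = phi(t0, t1) x1 - x0 and build the open-loop control.
eta = np.linalg.solve(W, phi(t0, t1) @ x1 - x0)
u = lambda t: B.T @ phi(t0, t).T @ eta

# Simulate the dynamics under this input; the final state should be close to x1.
sol = solve_ivp(lambda t, x: A @ x + (B @ u(t)).ravel(), (t0, t1), x0,
                rtol=1e-9, atol=1e-9)
print(sol.y[:, -1])   # approximately [1, 0]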

Note that the matrix $W$ defined above has the following properties:

$$\frac{d}{dt}W(t,t_1) = A(t)W(t,t_1) + W(t,t_1)A(t)^{T} - B(t)B(t)^{T}, \qquad W(t_1,t_1) = 0$$
$$W(t_0,t_1) = W(t_0,t) + \phi(t_0,t)W(t,t_1)\phi(t_0,t)^{T}$$ [2]
Rank condition for controllability

The Controllability Gramian involves integration of the state-transition matrix of a system. A simpler condition for controllability is a rank condition analogous to the Kalman rank condition for time-invariant systems.

Consider a continuous-time linear system $\Sigma$ smoothly varying in an interval $[t_0, t]$ of $\mathbb{R}$:

$$\dot{\mathbf{x}}(t) = A(t)\mathbf{x}(t) + B(t)\mathbf{u}(t)$$
$$\mathbf{y}(t) = C(t)\mathbf{x}(t) + D(t)\mathbf{u}(t).$$

The state-transition matrix $\phi$ is also smooth. Introduce the $n \times m$ matrix-valued function $M_0(t) = \phi(t_0,t)B(t)$ and define

$$M_k(t) = \frac{d^{k} M_0}{dt^{k}}(t), \qquad k \geq 1.$$

Consider the matrix of matrix-valued functions obtained by listing all the columns of the $M_i$, $i = 0, 1, \ldots, k$:

$$M^{(k)}(t) := \left[M_0(t), \ldots, M_k(t)\right].$$

If there exists a $\bar{t} \in [t_0, t]$ and a nonnegative integer $k$ such that $\operatorname{rank} M^{(k)}(\bar{t}) = n$, then $\Sigma$ is controllable.[3]

If $\Sigma$ is also analytically varying in the interval $[t_0, t]$, then $\Sigma$ is controllable on every nontrivial subinterval of $[t_0, t]$ if and only if there exist a $\bar{t} \in [t_0, t]$ and a nonnegative integer $k$ such that $\operatorname{rank} M^{(k)}(\bar{t}) = n$.[3]

The above conditions can still be difficult to check, since they involve the computation of the state-transition matrix $\phi$. Another equivalent condition is defined as follows. Let $B_0(t) = B(t)$, and for each $i \geq 0$ define

$$B_{i+1}(t) = A(t)B_i(t) - \frac{d}{dt}B_i(t).$$

In this case, each $B_i$ is obtained directly from the data $(A(t), B(t))$. The system is controllable if there exist a $\bar{t} \in [t_0, t]$ and a nonnegative integer $k$ such that $\operatorname{rank}\left(\left[B_0(\bar{t}), B_1(\bar{t}), \ldots, B_k(\bar{t})\right]\right) = n$.[3]

Consider a system varying analytically in $(-\infty, \infty)$ with matrices

$$A(t) = \begin{bmatrix} t & 1 & 0 \\ 0 & t^{3} & 0 \\ 0 & 0 & t^{2} \end{bmatrix}, \qquad B(t) = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}.$$

Then

$$[B_0(0), B_1(0), B_2(0), B_3(0)] = \begin{bmatrix} 0 & 1 & 0 & -1 \\ 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 2 \end{bmatrix},$$

and since this matrix has rank 3, the system is controllable on every nontrivial interval of $\mathbb{R}$.
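
The computation in this example can be reproduced symbolically. The following sketch (an illustration, not part of the article) builds $B_1, B_2, B_3$ from the recursion $B_{i+1}(t) = A(t)B_i(t) - \frac{d}{dt}B_i(t)$ and checks the rank at $t = 0$.

import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[t, 1, 0],
               [0, t**3, 0],
               [0, 0, t**2]])
B = sp.Matrix([0, 1, 1])

cols = [B]                                   # B_0
for _ in range(3):                           # B_1, B_2, B_3
    cols.append(A * cols[-1] - sp.diff(cols[-1], t))

M = sp.Matrix.hstack(*cols).subs(t, 0)       # [B_0(0)  B_1(0)  B_2(0)  B_3(0)]
print(M)         # Matrix([[0, 1, 0, -1], [1, 0, 0, 0], [1, 0, 0, 2]])
print(M.rank())  # 3 -> controllable on every nontrivial interval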

Continuous linear time-invariant (LTI) systems

Consider the continuous linear time-invariant system

$$\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t)$$
$$\mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t)$$

where

$\mathbf{x}$ is the $n \times 1$ "state vector",
$\mathbf{y}$ is the $m \times 1$ "output vector",
$\mathbf{u}$ is the $r \times 1$ "input (or control) vector",
$A$ is the $n \times n$ "state matrix",
$B$ is the $n \times r$ "input matrix",
$C$ is the $m \times n$ "output matrix",
$D$ is the $m \times r$ "feedthrough (or feedforward) matrix".

The $n \times nr$ controllability matrix is given by

$$R = \begin{bmatrix} B & AB & A^{2}B & \cdots & A^{n-1}B \end{bmatrix}.$$

The system is controllable if the controllability matrix has full row rank (i.e. $\operatorname{rank}(R) = n$).
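
As a quick illustration of the rank test, the sketch below builds the controllability matrix for an assumed two-state example (a double integrator; the matrices are not from the article) and checks its rank.

import numpy as np

def ctrb(A, B):
    # Controllability matrix R = [B, AB, A^2 B, ..., A^(n-1) B].
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

# Assumed example: double integrator driven through the second state.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])

R = ctrb(A, B)
print(R)                                       # 2 x 2 controllability matrix
print(np.linalg.matrix_rank(R) == A.shape[0])  # True -> controllable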

Discrete linear time-invariant (LTI) systems

For a discrete-time linear state-space system (i.e. time variable $k \in \mathbb{Z}$) the state equation is

$$\mathbf{x}(k+1) = A\mathbf{x}(k) + B\mathbf{u}(k),$$

where $A$ is an $n \times n$ matrix and $B$ is an $n \times r$ matrix (i.e. $\mathbf{u}$ is $r$ inputs collected in an $r \times 1$ vector). The test for controllability is that the $n \times nr$ matrix

$$\mathcal{C} = \begin{bmatrix} B & AB & A^{2}B & \cdots & A^{n-1}B \end{bmatrix}$$

has full row rank (i.e., $\operatorname{rank}(\mathcal{C}) = n$). That is, if the system is controllable, $\mathcal{C}$ will have $n$ columns that are linearly independent; if $n$ columns of $\mathcal{C}$ are linearly independent, each of the $n$ states is reachable by giving the system proper inputs through the variable $u(k)$.

Given the state $\mathbf{x}(0)$ at an initial time, arbitrarily denoted as $k = 0$, the state equation gives $\mathbf{x}(1) = A\mathbf{x}(0) + B\mathbf{u}(0)$, then $\mathbf{x}(2) = A\mathbf{x}(1) + B\mathbf{u}(1) = A^{2}\mathbf{x}(0) + AB\mathbf{u}(0) + B\mathbf{u}(1)$, and so on with repeated back-substitutions of the state variable, eventually yielding

$$\mathbf{x}(n) = B\mathbf{u}(n-1) + AB\mathbf{u}(n-2) + \cdots + A^{n-1}B\mathbf{u}(0) + A^{n}\mathbf{x}(0)$$

or equivalently

$$\mathbf{x}(n) - A^{n}\mathbf{x}(0) = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix} \begin{bmatrix} \mathbf{u}^{T}(n-1) & \mathbf{u}^{T}(n-2) & \cdots & \mathbf{u}^{T}(0) \end{bmatrix}^{T}.$$

Imposing any desired value of the state vector $\mathbf{x}(n)$ on the left side, this can always be solved for the stacked vector of control vectors if and only if the matrix of matrices at the beginning of the right side has full row rank.
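
To make this concrete, the sketch below (with assumed example matrices, not from the article) stacks the inputs and solves the equation above for a sequence of inputs reaching a chosen target in $n$ steps.

import numpy as np

# Assumed discrete-time example.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

x0 = np.zeros(n)
x_target = np.array([3.0, -1.0])   # arbitrary desired x(n)

# Columns of C multiply the stacked inputs [u(n-1); u(n-2); ...; u(0)].
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
rhs = x_target - np.linalg.matrix_power(A, n) @ x0
u_stack, *_ = np.linalg.lstsq(C, rhs, rcond=None)
u_seq = u_stack[::-1]              # reorder as u(0), u(1), ..., u(n-1)

# Simulate to confirm the target is reached after n steps.
x = x0.copy()
for u in u_seq:
    x = A @ x + (B * u).ravel()
print(x)                           # approximately [3., -1.]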

For example, consider the case when $n = 2$ and $r = 1$ (i.e. only one control input). Thus, $B$ and $AB$ are $2 \times 1$ vectors. If $\begin{bmatrix} B & AB \end{bmatrix}$ has rank 2 (full rank), then $B$ and $AB$ are linearly independent and span the entire plane. If the rank is 1, then $B$ and $AB$ are collinear and do not span the plane.

Assume that the initial state is zero.

At time $k = 0$: $x(1) = A\mathbf{x}(0) + B\mathbf{u}(0) = B\mathbf{u}(0)$

At time $k = 1$: $x(2) = A\mathbf{x}(1) + B\mathbf{u}(1) = AB\mathbf{u}(0) + B\mathbf{u}(1)$

At time $k = 0$ all of the reachable states lie on the line spanned by the vector $B$. At time $k = 1$ all of the reachable states are linear combinations of $AB$ and $B$. If the system is controllable, then these two vectors span the entire plane, and they do so by time $k = 2$. The assumption that the initial state is zero is merely for convenience: if all states can be reached from the origin, then any state can be reached from any other state (merely a shift in coordinates).

This example holds for all positive $n$, but the case of $n = 2$ is easier to visualize.

Analogy for example of n = 2

Consider an analogy to the previous example system. You are sitting in your car on an infinite, flat plane, facing north. The goal is to reach any point in the plane by driving a distance in a straight line, coming to a full stop, turning, and then driving another distance, again in a straight line. If your car has no steering, then you can only drive straight, which means you can only drive on a line (in this case the north-south line, since you started facing north). The lack of steering is analogous to the case in which the rank of $\mathcal{C}$ is 1 (the two distances you drove are on the same line).

Now, if your car did have steering, then you could easily drive to any point in the plane; this is analogous to the case in which the rank of $\mathcal{C}$ is 2.

If you change this example to $n = 3$, then the analogy would be flying in space to reach any position in 3D space (ignoring the orientation of the aircraft).

Although the 3-dimensional case is harder to visualize, the concept of controllability is still analogous.

Nonlinear systems

Nonlinear systems in the control-affine form

$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}) + \sum_{i=1}^{m}\mathbf{g}_i(\mathbf{x})\,u_i$$

are locally accessible about $x_0$ if the accessibility distribution $R$ spans $n$-dimensional space, where $n$ equals the dimension of $\mathbf{x}$ and $R$ is given by:[4]

$$R = \begin{bmatrix} \mathbf{g}_1 & \cdots & \mathbf{g}_m & [\mathrm{ad}_{\mathbf{g}_i}^{k}\mathbf{g}_j] & \cdots & [\mathrm{ad}_{\mathbf{f}}^{k}\mathbf{g}_i] \end{bmatrix}.$$

Here, $[\mathrm{ad}_{\mathbf{f}}^{k}\mathbf{g}]$ denotes the repeated (iterated) Lie bracket, defined recursively by

$$[\mathrm{ad}_{\mathbf{f}}^{0}\mathbf{g}] = \mathbf{g}, \qquad [\mathrm{ad}_{\mathbf{f}}^{k}\mathbf{g}] = [\mathbf{f}, [\mathrm{ad}_{\mathbf{f}}^{k-1}\mathbf{g}]].$$

The controllability matrix for linear systems in the previous section can in fact be derived from this equation.
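
As an illustration of the accessibility computation, the sketch below (an assumed driftless example, not from the article) computes the Lie bracket of the two input vector fields of the nonholonomic integrator $\dot{x}_1 = u_1$, $\dot{x}_2 = u_2$, $\dot{x}_3 = x_1 u_2 - x_2 u_1$ and checks that the three fields together span 3-dimensional space; since the drift $\mathbf{f}$ is zero here, no brackets with $\mathbf{f}$ are needed.

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])

# Input vector fields of the nonholonomic integrator (assumed example, drift f = 0).
g1 = sp.Matrix([1, 0, -x2])
g2 = sp.Matrix([0, 1, x1])

def lie_bracket(f, g, x):
    # [f, g] = (dg/dx) f - (df/dx) g
    return g.jacobian(x) * f - f.jacobian(x) * g

R = sp.Matrix.hstack(g1, g2, lie_bracket(g1, g2, x))
print(R)         # last column is the bracket [g1, g2] = (0, 0, 2)^T
print(R.rank())  # 3 -> the accessibility distribution spans 3-dimensional space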

Controllability via state feedback

When control authority on a linear dynamical system is exerted through a choice of a time-varying feedback gain matrix $K(t)$, the system

$$\dot{\mathbf{x}} = (A - BK(t))\mathbf{x}$$

is nonlinear, in that products of control parameters and states are present. The accessibility distribution $R$ is, as before,

$$R = \begin{bmatrix} B & AB & \cdots & A^{n-1}B \end{bmatrix}.$$

It is clear that for the system to be controllable, it is necessary that $R$ have full column rank. It turns out that this condition is also sufficient. However, the control strategy described earlier must be slightly modified so that the trajectory produced when steering the system between the specified states does not pass through the origin; otherwise the regulating input cannot be written in the feedback form $u = -K(t)\mathbf{x}$.

Collective controllability: control of the state transition via feedback

Collective controllability represents the ability to steer $n$ linear dynamical systems that obey identical dynamics

$$\dot{\mathbf{x}}^{(i)}(t) = A\mathbf{x}^{(i)}(t) + B\mathbf{u}^{(i)}(t),$$

where $n$ equals the dimension of $\mathbf{x}$, between specified starting and ending configurations by way of a common state feedback gain matrix $K(t)$, each system thereby instantiating a control input

$$\mathbf{u}^{(i)}(t) = K(t)\mathbf{x}^{(i)}(t)$$

for $i \in \{1, \ldots, n\}$, respectively.

The accessibility distribution $R$ having full column rank is trivially a necessary condition. It is also sufficient, and in fact the collective is strongly controllable, in that it can be steered from an initial configuration

$$\Phi(0) = \begin{bmatrix} \mathbf{x}^{(1)}(0) & \cdots & \mathbf{x}^{(n)}(0) \end{bmatrix}$$

to any specified terminal configuration

$$\Phi(T) = \begin{bmatrix} \mathbf{x}^{(1)}(T) & \cdots & \mathbf{x}^{(n)}(T) \end{bmatrix},$$

provided $\det(\Phi(0)\Phi(T)) > 0$ and $R$ has full column rank, over any specified time interval $[0, T]$, through a choice of a common time-varying feedback gain matrix $K(t)$.[5]

Null controllability

If a discrete-time control system is null-controllable, it means that for every initial state $x(0) = x_0$ there exists a control $u(k)$ such that $x(k_0) = 0$ at some step $k_0$. This is equivalent to the condition that there exists a matrix $F$ such that $A + BF$ is nilpotent.

This can be easily shown by the controllable-uncontrollable decomposition.
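
As an illustration of the nilpotency characterization, the sketch below uses assumed example matrices (not from the article) and a deadbeat gain $F$ that places both closed-loop eigenvalues at zero, so that every initial state reaches the origin in at most $n$ steps.

import numpy as np

# Assumed example matrices.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
F = np.array([[-1.0, -2.0]])   # deadbeat gain: both eigenvalues of A + BF are zero

Acl = A + B @ F
print(np.linalg.matrix_power(Acl, 2))   # zero matrix -> A + BF is nilpotent

# Under u(k) = F x(k), any initial state reaches the origin within n = 2 steps.
x = np.array([3.0, -1.5])
for _ in range(2):
    x = Acl @ x
print(x)                                # [0., 0.]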

Output controllability

Output controllability is the related notion for the output of the system (denoted $y$ in the previous equations); output controllability describes the ability of an external input to move the output from any initial condition to any final condition in a finite time interval. There need not be any relationship between state controllability and output controllability.

For a linear continuous-time system, like the example above, described by matrices $A$, $B$, $C$, and $D$, the $m \times (n+1)r$ output controllability matrix

$$\begin{bmatrix} CB & CAB & CA^{2}B & \cdots & CA^{n-1}B & D \end{bmatrix}$$

has full row rank (i.e. rank $m$) if and only if the system is output controllable.[1]: 742
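
A direct way to apply this test is to assemble the matrix and check its rank. The sketch below uses assumed example matrices (a double integrator with a single position output; not from the article).

import numpy as np

# Assumed example matrices.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])    # single output: the first state
D = np.zeros((1, 1))

n, m = A.shape[0], C.shape[0]
blocks = [C @ np.linalg.matrix_power(A, k) @ B for k in range(n)] + [D]
OC = np.hstack(blocks)        # [CB, CAB, ..., CA^(n-1)B, D], size m x (n+1)r
print(OC)                                # [[0. 1. 0.]]
print(np.linalg.matrix_rank(OC) == m)    # True -> output controllable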

Controllability under input constraints

In systems with limited control authority, it is often no longer possible to move any initial state to any final state inside the controllable subspace. This phenomenon is caused by constraints on the input, which could be inherent to the system (e.g. due to a saturating actuator) or imposed on the system for other reasons (e.g. due to safety-related concerns). The controllability of systems with input and state constraints is studied in the context of reachability[6] and viability theory.[7]

Controllability in the behavioral framework

In the so-called behavioral system theoretic approach due to Willems (see people in systems and control), models considered do not directly define an input–output structure. In this framework systems are described by admissible trajectories of a collection of variables, some of which might be interpreted as inputs or outputs.

A system is then defined to be controllable in this setting, if any past part of a behavior (trajectory of the external variables) can be concatenated with any future trajectory of the behavior in such a way that the concatenation is contained in the behavior, i.e. is part of the admissible system behavior.[8]: 151 

Stabilizability

A slightly weaker notion than controllability is that of stabilizability. A system is said to be stabilizable when all uncontrollable state variables have stable dynamics. Thus, even though some of the state variables cannot be controlled (as determined by the controllability test above), all the state variables will still remain bounded during the system's behavior.[9]

Reachable set

Let $T \in \mathbf{T}$ and $x \in X$ (where $X$ is the set of all possible states and $\mathbf{T}$ is an interval of time). The reachable set from $x$ in time $T$ is defined as:[3]

$$R^{T}(x) = \left\{ z \in X : x \overset{T}{\rightarrow} z \right\},$$

where $x \overset{T}{\rightarrow} z$ denotes that there exists a state transition from $x$ to $z$ in time $T$.

For autonomous systems the reachable set is given by

$$\mathrm{Im}(R) = \mathrm{Im}(B) + \mathrm{Im}(AB) + \cdots + \mathrm{Im}(A^{n-1}B),$$

where $R$ is the controllability matrix.

In terms of the reachable set, the system is controllable if and only if $\mathrm{Im}(R) = \mathbb{R}^{n}$.

Proof. We have the following equalities:

$$R = [B \ AB \ \cdots \ A^{n-1}B]$$
$$\mathrm{Im}(R) = \mathrm{Im}([B \ AB \ \cdots \ A^{n-1}B])$$
$$\dim(\mathrm{Im}(R)) = \mathrm{rank}(R)$$

If the system is controllable, $R$ must contain $n$ linearly independent columns, so:

$$\dim(\mathrm{Im}(R)) = n$$
$$\mathrm{rank}(R) = n$$
$$\mathrm{Im}(R) = \mathbb{R}^{n} \quad \blacksquare$$

A set related to the reachable set is the controllable set, defined by

$$C^{T}(x) = \left\{ z \in X : z \overset{T}{\rightarrow} x \right\}.$$

The relation between reachability and controllability is presented by Sontag:[3]

(a) An $n$-dimensional discrete linear system is controllable if and only if

$$R(0) = R^{k}(0) = X$$

(where $X$ is the set of all possible values or states of $x$ and $k$ is the time step).

(b) A continuous-time linear system is controllable if and only if

$$R(0) = R^{e}(0) = X \quad \text{for all } e > 0,$$

if and only if

$$C(0) = C^{e}(0) = X \quad \text{for all } e > 0.$$

Example. Let the system be an $n$-dimensional discrete-time linear time-invariant system with the solution formula

$$\phi(n, 0, 0, w) = \sum_{i=1}^{n} A^{i-1}B\,w(n-i)$$

(where $\phi(\text{final time}, \text{initial time}, \text{initial state}, \text{input})$ denotes the state reached at the final time, starting from the given initial state at the initial time under the input $w$).

It follows that the future state is in $R^{k}(0)$ if and only if it is in $\mathrm{Im}(R)$, the image of the linear map

$$R(A, B) \triangleq [B \ AB \ \cdots \ A^{n-1}B],$$

which maps

$$u^{n} \mapsto X.$$

When $u = K^{m}$ and $X = K^{n}$ we identify $R(A, B)$ with an $n \times nm$ matrix whose columns are $B, AB, \ldots, A^{n-1}B$ in that order. If the system is controllable, the rank of $[B \ AB \ \cdots \ A^{n-1}B]$ is $n$. If this is true, the image of the linear map $R$ is all of $X$. Based on that, we have

$$R(0) = R^{k}(0) = X \quad \text{with } X = \mathbb{R}^{n}.$$
