Taming Heterogeneity and Complexity of Embedded Control
Ebook · 1,124 pages · 11 hours

About this ebook

This book gathers together a selection of papers presented at the Joint CTS-HYCON Workshop on Nonlinear and Hybrid Control held at the Paris Sorbonne, France, 10-12 July 2006. The main objective of the Workshop was to promote the exchange of ideas and experiences and reinforce scientific contacts in the large multidisciplinary area of the control of nonlinear and hybrid systems.
Language: English
Publisher: Wiley
Release date: May 21, 2013
ISBN: 9781118615133

    Book preview

    Taming Heterogeneity and Complexity of Embedded Control – Françoise Lamnabhi-Lagarrigue

    Ellipsoidal Output-Feedback Sets for H∞ Control of a Class of Stochastic Hybrid Systems with State-Dependent Noise

    Samir Aberkane – Jean Christophe Ponsart – Dominique Sauter

    CRAN – CNRS UMR 7039

    Université Henri Poincaré, Nancy 1, BP 239,

    F-54506 Vandoeuvre-lès-Nancy Cedex

    Tel.: +33 3 83 68 44 80, Fax: +33 3 83 68 44 62

    e-mail: samir.aberkane@cran.uhp-nancy.fr


    ABSTRACT. This paper deals with static output feedback H∞ control of continuous-time Active Fault Tolerant Control Systems with Markovian Parameters (AFTCSMP) and state-dependent noise. It adopts a new framework based on the synthesis of ellipsoidal sets of controllers. It is also shown that the obtained results can easily be applied to the problem of mode-independent static output feedback H∞ control of another class of stochastic hybrid systems known as Markovian Jump Linear Systems. Results are formulated as matrix inequalities, one of which is nonlinear. A numerical algorithm based on nonconvex optimization is provided and its running is illustrated on classical examples from the literature.

    KEYWORDS: Stochastic Hybrid Systems, Markovian Jumping Parameters, H∞ Control, Static Output Feedback, Ellipsoidal Sets, Linear Matrix Inequalities (LMI)


    1. Introduction

    As performance requirements increase in advanced technological systems, their associated control systems are becoming more and more complex. At the same time, failures of components in such complex systems can have severe consequences. It is therefore very important to consider the safety and fault tolerance of such systems at the design stage. For these safety-critical systems, Fault Tolerant Control Systems (FTCS) have been developed to meet these essential objectives. FTCS have been a subject of great practical importance, which has attracted a lot of interest over the last three decades. A bibliographical review on reconfigurable fault tolerant control systems can be found in [ZHA 03].

    Active fault tolerant control systems are feedback control systems that reconfigure the control law in real time based on the response from an automatic fault detection and identification (FDI) scheme. The dynamic behaviour of Active Fault Tolerant Control Systems (AFTCS) is governed by stochastic differential equations and can be viewed as a general hybrid system [SRI 93]. A major class of hybrid systems is Markovian Jump Linear Systems (MJLS). In MJLS, a single jump process is used to describe the random variations affecting the system parameters. This process is represented by a finite state Markov chain and is called the plant regime mode. The theory of stability, optimal control and H2/H∞ control, as well as important applications of such systems, can be found in several papers in the current literature, for instance in [BOU 06, BOU 05, BOU 99, COS 99, FAR 00, SOU 93, JI 90, JI 92].

    To deal with AFTCS, another class of hybrid systems was defined, denoted AFTCSMP. In this class of hybrid systems, two random processes are defined: the first represents system component failures and the second represents the FDI process used to reconfigure the control law. This model was proposed by Srichander and Walker [SRI 93]. Necessary and sufficient conditions for stochastic stability of AFTCSMP were developed for a single component failure (actuator failures). The problem of stochastic stability of AFTCSMP in the presence of noise, parameter uncertainties, detection errors, detection delays and actuator saturation limits has also been investigated in [MAH 99a, MAH 01, MAH 03]. Another issue, related to the synthesis of fault tolerant control laws, was addressed in [MAH 99b, SHI 97, SHI 03]. In [MAH 99b], the authors designed an optimal control law for AFTCSMP using the matrix minimum principle to minimize an equivalent deterministic cost function. The problem of H∞ and robust H∞ control was treated in [SHI 97, SHI 03] for both continuous- and discrete-time AFTCSMP. The authors showed that the state feedback control problem can be solved in terms of the solutions of a set of coupled Riccati equations. The dynamic/static output feedback counterpart was treated in [ABE 05b, ABE 05c, ABE 05a] in a convex programming framework. Indeed, the authors provide an LMI characterization of dynamic/static output feedback compensators that stochastically stabilize (robustly stabilize) the AFTCSMP and ensure H∞ (robust H∞) constraints. In addition, it is important to mention that the design problem in the framework of AFTCSMP remains an open and challenging problem. This is due, in particular, to the fact that the controller depends only on the FDI process, i.e. the number of controllers to be designed is smaller than the total number of closed-loop system modes obtained by combining the failure and FDI processes. The design problem thus involves searching for feasible solutions of a problem with more constraints than variables. Generally speaking, tractable design methods for this stochastic FTC problem are lacking. Indeed, in [ABE 05b, ABE 05c, MAH 03, SHI 97, SHI 03], the authors assume that the controller can access both the failure and FDI processes. However, this assumption is too restrictive to be applicable in practical FTC systems. In this note, the assumption on the availability of the failure processes for synthesis purposes is relaxed.

    On the other hand, one of the most challenging open problems in control theory is the synthesis of fixed-order or static output feedback controllers that meet desired performances and specifications [SYR 97]. Among the variations of this problem, this note is concerned with static output feedback H∞ control of continuous-time AFTCSMP with state-dependent noise. This problem is addressed under a new framework, based on the synthesis of ellipsoidal sets of controllers, introduced in [PEA 02, PEA 05]. The difficulty resulting from the fact that the controller depends only on the FDI process is shown to be naturally dealt with in this context. It is also shown that the obtained results can easily be applied to mode-independent static output feedback H∞ control of MJLS. Results are formulated as matrix inequalities, one of which is nonlinear. A numerical algorithm based on nonconvex optimization is provided and its running is illustrated on classical examples from the literature.

    This paper is organized as follows: Section 2 describes the dynamical model of the system with appropriately defined random processes. A brief summary of basic stochastic terms, results and definitions is given in Section 3. Section 4 addresses the internal stochastic stabilization of the AFTCSMP. Section 5 considers the H∞ control problem for output feedback. In Section 6, a numerical algorithm based on nonconvex optimization is provided and its running is illustrated on classical examples from the literature. Finally, a conclusion is given in Section 7.

    Notations. The notations in this paper are quite standard. ℝ^(m×n) is the set of m-by-n real matrices and 𝕊^n is the subset of symmetric matrices in ℝ^(n×n). A′ is the transpose of the matrix A. The notation X ≥ Y (respectively X > Y), where X and Y are symmetric matrices, means that X − Y is positive semi-definite (respectively positive definite). I and 0 are identity and zero matrices of appropriate dimensions. 𝔼{·} denotes the expectation operator with respect to some probability measure P. L²[0,∞) stands for the space of square-integrable vector functions over the interval [0,∞). ‖·‖ refers to either the Euclidean vector norm or the matrix norm, which is the operator norm induced by the standard vector norm. ‖·‖₂ stands for the norm in L²[0,∞), while ‖·‖𝔼₂ denotes the norm in L²((Ω, ℱ, P), [0,∞)), where (Ω, ℱ, P) is a probability space. In block matrices, ★ indicates symmetric terms:

    2. Dynamical Model of the AFTCSMP with Wiener Process

    To describe the class of linear systems with Markovian jumping parameters that we deal with in this paper, let us fix a complete probability space (Ω, ℱ, P). This class of systems has a hybrid state vector. The first component is continuous and represents the system states; the second is discrete and represents the failure processes affecting the system and the FDI process. The dynamical model of the AFTCSMP with state-dependent noise, defined on the fundamental probability space (Ω, ℱ, P), is described by the following differential equations:

    (1)

    where x(t) ∈ ℝⁿ is the system state, u(y(t), ψ(t), t) ∈ ℝʳ is the system input, y(t) ∈ ℝ^q is the system measured output, z(t) ∈ ℝ^p is the controlled output, and w(t) ∈ ℝ^m is the system external disturbance. ξ(t), η(t) and ψ(t) represent the plant component failure process, the actuator failure process and the FDI process, respectively; they are separable and measurable Markov processes with finite state spaces Z = {1, 2, …, z}, S = {1, 2, …, s} and R = {1, 2, …, r}, respectively. w̄(t) = [w̄1(t) … w̄v(t)]′ is a v-dimensional standard Wiener process on the given probability space (Ω, ℱ, P), assumed to be independent of the Markov processes. A(ξ(t)), B(η(t)), E(ξ(t), η(t)), D2(ξ(t), η(t)), D1(η(t)) and W(ξ(t), η(t)) are properly dimensioned matrices which depend on the random parameters.

    Remark 1. For the existence and uniqueness of the solution of (1), we refer the reader to [ARN 74, KUS 67], and the references therein.

    In AFTCS, we consider that the control law is only a function of the measurable FDI process ψ(t). Therefore, we introduce a static output feedback compensator (φs) of the form:

    (2)

    Applying the controller φs to the AFTCSMP φ, we obtain the following closed loop system:

    (3)

    where

    For notational simplicity, we will denote A(ξ(t)) = Ai when ξ(t) = i ∈ Z; B(η(t)) = Bj and D1(η(t)) = D1j when η(t) = j ∈ S; E(ξ(t), η(t)) = Eij, D2(ξ(t), η(t)) = D2ij and W(ξ(t), η(t)) = Wij when ξ(t) = i ∈ Z and η(t) = j ∈ S; and K(ψ(t)) = Kk when ψ(t) = k ∈ R. We also denote x(t) = xt, y(t) = yt, z(t) = zt, w(t) = wt, ξ(t) = ξt, η(t) = ηt, ψ(t) = ψt, and the initial conditions x(t0) = x0, ξ(t0) = ξ0, η(t0) = η0 and ψ(t0) = ψ0.

    The FDI and the Failure Processes

    ξ(t), η(t) and ψ(t) being homogeneous Markov processes with finite state spaces, we can define the transition probability of the plant components failure process as [MAH 03, SRI 93]:

    The transition probability of the actuator failure process is given by:

    where πij is the plant components failure rate and νkl is the actuator failure rate; Δt is the infinitesimal transition time interval and o(Δt) comprises infinitesimal terms of order higher than Δt.

    Given that ξ = k and η = l, the conditional transition probability of the FDI process ψ(t) is:

    Here, λ represents the transition rate from i to v for the Markov process ψ(t), conditioned on ξ = k ∈ Z and η = l ∈ S. Depending on the values of i, v ∈ R, k ∈ Z and l ∈ S, various interpretations, such as the rate of false detection and isolation, the rate of correct detection and isolation, the false alarm recovery rate, etc., can be given to this rate [MAH 03, SRI 93].
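    The rate semantics above (a jump occurs in [t, t + Δt] with probability λΔt + o(Δt)) can be made concrete with a small simulation. The sketch below is an illustrative assumption only: it simulates a hypothetical 3-state jump process such as ψ(t), with rates chosen arbitrarily and not taken from the examples of Section 6, by drawing exponential holding times and jump targets.

```python
import random

def simulate_ctmc(rates, x0, horizon, rng):
    """Simulate a finite-state continuous-time Markov chain.

    rates[i][j] is the transition rate from state i to state j (i != j);
    the holding time in state i is exponential with rate sum_j rates[i][j].
    Returns the list of (jump time, state) pairs, starting at (0.0, x0).
    """
    t, x, path = 0.0, x0, [(0.0, x0)]
    while t < horizon:
        out = [(j, r) for j, r in enumerate(rates[x]) if j != x and r > 0.0]
        total = sum(r for _, r in out)
        if total == 0.0:                     # absorbing state: no more jumps
            break
        t += rng.expovariate(total)          # exponential holding time
        u = rng.uniform(0.0, total)          # pick next state with prob ~ rate
        acc = 0.0
        for j, r in out:
            acc += r
            if u <= acc:
                x = j
                break
        path.append((t, x))
    return path

# Hypothetical 3-state FDI-like process; the rates are purely illustrative.
rates = [[0.0, 0.2, 0.1],
         [0.5, 0.0, 0.3],
         [0.4, 0.2, 0.0]]
path = simulate_ctmc(rates, x0=0, horizon=50.0, rng=random.Random(1))
print(len(path), path[-1])
```

Dividing the empirical jump counts by the time spent in each state would recover the assumed rates, which is exactly the conditional-transition-rate reading used above.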

    3. Definitions

    In this section, we will first give some basic definitions related to stochastic stability notions and then we will summarize some results about exponential stability in the mean square sense of the AFTCSMP with Wiener process. Without loss of generality, we assume that the equilibrium point, x = 0, is the solution at which stability properties are examined.

    3.1. Stochastic Stability

    Definition 1.

    System (3) is said to be:

    (i) stochastically stable (SS) if there exists a finite positive constant K(x0, ξ0, η0, ψ0) such that the following holds for any initial conditions (x0, ξ0, η0, ψ0):

    (4)

    (ii) internally exponentially stable in the mean square sense (IESS) if it is exponentially stable in the mean square sense for wt = 0, i.e. for any ξ0, η0, ψ0 and some γ(ξ0, η0, ψ0), there exist two numbers a > 0 and b > 0 such that, when ‖x0‖ ≤ γ(ξ0, η0, ψ0), the following inequality holds ∀t ≥ t0 for all solutions of (3) with initial condition x0:

    (5)

    The following theorem gives a sufficient condition for internal exponential stability in the mean square sense for the system (φ) coupled with (φs).

    Theorem 1. The solution x = 0 of the system (φ) coupled with (φs) is internally exponentially stable in the mean square sense for t ≥ t0 if there exists a stochastic Lyapunov function ϑ(xt, ξt, ηt, ψt, t) such that

    (6)

    and

    (7)

    for some positive constants K1, K2 and K3, where the operator appearing in (7) is the weak infinitesimal operator of the joint Markov process {xt, ξt, ηt, ψt}.

    A necessary condition for internal exponential stability in the mean square sense for the system (φ) coupled with (φs) is given by theorem 2.

    Theorem 2. If the solution x = 0 of the system (φ) coupled with (φs) is internally exponentially stable in the mean square sense, then for any given quadratic positive definite function W(xt, ξt, ηt, ψt, t) in the variables x which is bounded and continuous ∀t ≥ t0, ∀ξt ∈ Z, ∀ηt ∈ S and ∀ψt ∈ R, there exists a quadratic positive definite function ϑ(xt, ξt, ηt, ψt, t) in x that satisfies the conditions in theorem 1 and whose image under the weak infinitesimal operator equals −W(xt, ξt, ηt, ψt, t).

    Remark 2. The proofs of these theorems follow the same arguments as in [MAH 03, SRI 93] for their proposed stochastic Lyapunov functions, so they are not shown in this paper to avoid repetition.

    The following proposition gives a necessary and sufficient condition for internal exponential stability in the mean square sense for the system (3).

    Proposition 1. A necessary and sufficient condition for internal exponential stability in the mean square sense of the system (3) is that there exist symmetric positive-definite matrices Pijk, i ∈ Z, j ∈ S and k ∈ R such that:

    (8)

    i ∈ Z, j ∈ S and k ∈ R, where

    (9)

    Proof. The proof of this proposition is easily deduced from theorems 1 and 2.

    Proposition 2. If the system (3) is internally exponentially stable in the mean square sense, then it is stochastically stable.

    Proof. The proof of this proposition follows the same lines as for the proof of proposition 4 in [ABE 05a].
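    To give a feel for conditions of the type in proposition 1, consider a drastically simplified analogue: a scalar jump-linear system ẋ = a(ϕt)x with a two-mode Markov process and no noise, for which the coupled Lyapunov-type conditions reduce to 2·a_i·p_i + Σ_j λ_ij·p_j < 0 with p_i > 0. The numbers below are assumptions for illustration, not taken from the paper.

```python
# Scalar two-mode illustration (assumed numbers): mode dynamics a,
# mode-transition generator L (rows sum to zero), candidate weights p.
a = (-1.0, -2.0)
L = ((-1.0, 1.0),
     (1.0, -1.0))
p = (1.0, 1.0)   # candidate Lyapunov weights p_i > 0

def lyapunov_lhs(i):
    # 2*a_i*p_i + sum_j L_ij*p_j: must be negative in every mode
    # for mean-square exponential stability of this scalar analogue.
    return 2.0 * a[i] * p[i] + sum(L[i][j] * p[j] for j in range(2))

vals = [lyapunov_lhs(i) for i in range(2)]
print(vals)  # [-2.0, -4.0]: the condition holds for this candidate
```

The point of the coupled structure is visible even here: the inequality for mode i involves the weights of all other modes through the transition rates, so the p_i cannot be chosen mode by mode.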

    3.2. Matrix Ellipsoids

    Throughout this note, a particular set of matrices is used. Due to the notations and by extension of the notion of ellipsoids of ℝⁿ, these sets are referred to as matrix ellipsoids.

    Definition 2. [PEA 00, PEA 02, PEA 05] Given three matrices X ∈ 𝕊^q, Y ∈ ℝ^(q×r) and Z ∈ 𝕊^r, the {X, Y, Z}-ellipsoid of ℝ^(r×q) is the set of matrices K satisfying the following matrix inequalities:

    (10)

    By definition, K0 = −Y′X⁻¹ is the center of the ellipsoid and R = Y′X⁻¹Y − Z is the radius. Inequalities (10) can also be written as

    (11)

    This definition shows that matrix ellipsoids are special cases of matrix sets defined by a quadratic matrix inequality. Some properties of these sets are:

    i) A matrix ellipsoid is a convex set;

    ii) the {X, Y, Z}-ellipsoid is nonempty iff the radius is positive semi-definite (R ≥ 0). This property can also be expressed as

    (12)
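    These objects are easy to manipulate numerically. The sketch below adopts one common convention from [PEA 02, PEA 05], namely the {X, Y, Z}-ellipsoid {K : KXK′ + KY + Y′K′ + Z ≤ 0} with X > 0, center K0 = −Y′X⁻¹ and radius R = Y′X⁻¹Y − Z; the specific matrices are illustrative assumptions only.

```python
import numpy as np

def ellipsoid_center_radius(X, Y, Z):
    """Center and radius of the {X, Y, Z}-ellipsoid
    {K : K X K' + K Y + Y' K' + Z <= 0}, assuming X > 0."""
    Xinv = np.linalg.inv(X)
    K0 = -Y.T @ Xinv               # center
    R = Y.T @ Xinv @ Y - Z         # radius; the set is nonempty iff R >= 0
    return K0, R

def in_ellipsoid(K, X, Y, Z, tol=1e-9):
    # Membership: the quadratic matrix expression must be negative semi-definite.
    F = K @ X @ K.T + K @ Y + Y.T @ K.T + Z
    return np.max(np.linalg.eigvalsh(F)) <= tol

X = np.eye(2)
Y = np.array([[1.0, 0.0], [0.5, 1.0]])
Z = Y.T @ Y - np.eye(2)            # chosen so that the radius is the identity

K0, R = ellipsoid_center_radius(X, Y, Z)
print(in_ellipsoid(K0, X, Y, Z))                  # center belongs when R >= 0
print(in_ellipsoid(K0 + 2 * np.eye(2), X, Y, Z))  # a point outside the set
```

Completing the square shows why: KXK′ + KY + Y′K′ + Z = (K − K0)X(K − K0)′ − R, so membership is exactly (K − K0)X(K − K0)′ ≤ R, the matrix analogue of a vector ellipsoid.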

    4. Stochastic Stabilization

    In this section, we address the problem of finding all static compensators (φs), as defined in section 2, such that the system (φ) coupled with (φs) becomes internally exponentially stable in the mean square sense. To this end, we use proposition 1 to obtain the following necessary and sufficient conditions for the internal exponential stability in the mean square sense of the system (3).

    Proposition 3. System (3) is internally exponentially stabilisable in the mean square sense by static output feedback if and only if there exist matrices Xk, Yk, Zk and Pijk that simultaneously satisfy the following LMI constraints

    (13)

    and the nonlinear inequality constraints

    (14)

    i Z, j S and k R, where

    (15)

    Let {Pijk, Xk, Yk, Zk} be a solution; then the nonempty {Xk, Yk, Zk}-ellipsoids are sets of stabilizing gains.

    Proof. The proof of this proposition follows essentially the same arguments as in [PEA 05]:

    Sufficiency. Assume that the constraints (13)-(14) are satisfied for some matrices. Due to the properties of matrix ellipsoids, the {Xk, Yk, Zk}-ellipsoids are nonempty. Take any element Kk. The LMI (13) implies that for all

    (16)

    Definition 2 implies that for all nonzero trajectories

    (17)

    i Z, j ∈ S and k ∈ R.

    Then, the closed-loop exponential stochastic stability is assessed by proposition 1 for the quadratic stochastic Lyapunov function

    Necessity. Assume the Kk are stabilizing static output feedback gains and that ϑ(xt, ξt, ηt, ψt) is a quadratic stochastic Lyapunov function. Then from proposition 1, we have

    (18)

    i Z, j ∈ S and k ∈ R.

    Applying the well-known Finsler Lemma [SKE 98], there exist scalars τijk such that

    (19)

    where . The inequality (13) is obtained with

    The bottom-right block implies 0 < k. Hence the proof is complete.

    Remark 3. The results developed above can be easily applied to the mode-independent static output feedback stochastic stabilization of MJLS. Indeed, let us consider the following closed loop dynamical model

    (20)

    where

    The process ϕt is a continuous-time, discrete-state Markov process taking values in a finite set H = {1, …, h}, with transition rate matrix Ξ = [ϕij]i,j=1,…,h. In this case, the transition probability of the jump process ϕt can be defined as:

    (21)

    with

    Then, the following corollary can be stated.

    Corollary 1. System (20) is internally exponentially stabilisable in the mean square sense by static output feedback if and only if there exist matrices X, Y, Z and Pi = Pi′ > 0 that simultaneously satisfy the following LMI constraints

    (22)

    and the nonlinear inequality constraint

    (23)

    i H, where

    (24)

    Let {Pi, X, Y, Z} be a solution; then the nonempty {X, Y, Z}-ellipsoid is a set of stabilizing gains.

    5. The H∞ Control Problem

    Let us consider the system (3) with

    z∞(t) stands for the controlled output related to H∞ performance.

    In this section, we deal with the design of controllers that stochastically stabilize the closed-loop system and guarantee disturbance rejection with a certain level γ∞ > 0. Mathematically, we are concerned with the characterization of compensators φs that stochastically stabilize the system (3) and guarantee the following for all w ∈ L²[0,∞):

    (25)

    where γ∞ > 0 is a prescribed level of disturbance attenuation to be achieved. To this end, we need the auxiliary result given by the following proposition.

    Proposition 4. If there exist symmetric positive-definite matrices P∞ijk, i ∈ Z, j ∈ S and k ∈ R such that

    (26)

    where

    for all i ∈ Z, j ∈ S and k ∈ R, then the system (3) is stochastically stable and satisfies

    (27)

    Proof. See [ABE 05a].

    Using the previous proposition, the following H∞ control result can be stated.

    Proposition 5. If there exist matrices Xk, Yk, Zk and P∞ijk that simultaneously satisfy the following LMI constraints

    (28)

    and the nonlinear inequality constraints

    (29)

    i ∈ Z, j ∈ S and k ∈ R, where

    then the {Xk, Yk, Zk}-ellipsoids are sets of stabilizing gains such that

    (30)

    Proof. The proof of this proposition follows the same arguments as for the proof of proposition 3.

    Remark 4. As for the internal stochastic stabilization problem, the mode-independent static output feedback H∞ control of MJLS can be solved in the same way as for the AFTCSMP. This result is stated in corollary 2.

    Corollary 2. If there exist matrices X, Y, Z and Pi = Pi′ > 0 that simultaneously satisfy the following LMI constraints

    (31)

    and the nonlinear inequality constraint

    (32)

    i ∈ H, where

    then the {X, Y, Z}-ellipsoid is a set of stabilizing gains such that

    (33)

    6. Computational Issues and Examples

    6.1 A Cone Complementary Algorithm

    The numerical examples are solved using a first-order iterative algorithm. It is based on a cone complementary technique [GHA 97], which makes it possible to concentrate the nonconvex constraint in the criterion of an optimisation problem.

    Lemma 1. The problem (28)-(29) is feasible if and only if zero is the global optimum of the optimisation problem

    (34)

    where

    Proof. The proof of this lemma follows the same arguments as in [PEA 05]. The positivity constraints on the matrix variables induce the following implications

    (35)

    Therefore, after some manipulations, one gets

    Thus the nonlinear constraint is satisfied.

    The converse implication is proved by taking the matrix variables such that their product is zero, ∀k ∈ R.

    As in [GHA 97, PEA 05], the optimisation problem (34) can then be solved with a first-order conditional gradient algorithm, also known as the Frank and Wolfe feasible-direction method; its properties are not recalled here. At each step, the nonlinear trace objective is relaxed as a linear objective evaluated at the matrices computed in the previous optimisation step, and the resulting LMI optimisation is repeated iteratively. The obtained sequence of objective values is strictly decreasing. However, there is no guarantee that the algorithm converges to the global optimum.

    Remark 5. [PEA 05] The stopping criterion of the usual gradient algorithm is either related to slow progress of the optimisation objective or to the achievement of a zero trace criterion. In the first case, the algorithm fails due to "plateauing" behavior or because it found an unsatisfactory local optimum. The second case corresponds to the expected success of the algorithm. Unfortunately, due to the positivity constraints, the algorithm more often stops when the trace criterion equals a chosen accuracy level ε rather than zero. The exact nonlinear constraint may then not be exactly satisfied, which is a significant weakness of the algorithm.

    As a matter of fact, since the equality constraints are not the goal of the original problem (28)-(29), in the numerical example below we adopted the following stopping criteria for the conditional gradient algorithm.

    • If the decrease of the trace criterion falls below a chosen level, then STOP: the algorithm failed.

    • As soon as the nonlinear constraints (29) are satisfied, STOP: the required ellipsoids are found.
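    The linearization step itself can be illustrated on a scalar analogue of the cone complementary idea. Everything below is an assumed toy instance, not the AFTCSMP problem: find P and S with P ≥ 2, S ≥ 0.1 and the relaxed coupling P·S ≥ 1, such that the complementarity P·S = 1 holds. Each iteration minimizes the linearized objective S_h·P + P_h·S over the relaxed set, mirroring how the trace objective is replaced by its linearization in (34).

```python
import math

# Scalar toy analogue of the cone complementary linearization (illustration
# only): drive P*S down to 1 while keeping P in [2, 10], S >= 0.1, P*S >= 1.
P, S = 5.0, 0.5                      # feasible start: P*S = 2.5 >= 1
trace = [P * S]
for _ in range(20):
    c1, c2 = S, P                    # linearized objective c1*P + c2*S
    # For positive c1, c2 the minimum over the relaxed set lies on the
    # boundary S = 1/P, so it suffices to minimize c1*P + c2/P over P,
    # whose unconstrained minimizer is sqrt(c2/c1), clipped to [2, 10].
    P = min(max(math.sqrt(c2 / c1), 2.0), 10.0)
    S = 1.0 / P
    trace.append(P * S)

print(trace[0], trace[-1])           # from 2.5 down to (numerically) 1
```

As in the matrix case, the linearized subproblems produce a non-increasing sequence of coupling values; here the complementarity P·S = 1 is reached, whereas in the matrix problem the iteration may stall at a local optimum, which motivates the stopping criteria above.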

    6.2 Numerical Examples

    a) Fault Tolerant Control

    In this section, the proposed H∞ static output feedback control of AFTCSMP is illustrated using a flight control example. Consider the nominal system with

    This model is adapted from [MAK 04]. It represents the lateral-directional dynamics of a McDonnell F-4C Phantom flying at Mach 0.6 at an altitude of 35,000 ft. The states xi, i = 1, …, 5 denote the lateral velocity (ft/s), the roll rate (rad/s), the yaw rate (rad/s), the roll angle (rad) and the yaw angle (rad), respectively. The control inputs u1, u2 and u3 correspond to the left aileron, the right aileron and the rudder surface displacements, respectively. For illustration purposes, we will consider two faulty modes:

    i) Mode 2: A 50% power loss on the left aileron;

    ii) Mode 3: Right aileron outage.

    Table 1. Numerical experiments

    From the above, we have that S = {1, 2, 3}, where mode 1 represents the nominal case. The failure process is assumed to have Markovian transition characteristics. The FDI process is also Markovian with three states R = {1, 2, 3}. The actuator failure rates are assumed to be:

    The FDI conditional transition rates are:

    For the above AFTCSMP, several numerical experiments are performed using the cone complementary algorithm. These tests are realised for various specifications on the H∞ performance (γ∞). Some cases are presented in Table 1, where iter is the number of algorithm iterations, time is the computation time (LMIs solved with the LMI toolbox, Matlab 6.5.1), Tr is the value of the trace optimisation criterion at the step when the algorithm stopped, and Kk0, k = 1, 2, 3 are the controllers obtained as the centers of the stabilising ellipsoids.

    b) Mode-Independent Control of MJLS

    We applied the proposed static output feedback H∞ control to a VTOL helicopter model adapted from [FAR 00]. The dynamics can be written as

    where ϕt indicates the airspeed. The parameters are given by

    The behavior of ϕt is modelled as a Markov chain with three different states, corresponding to airspeeds of 135 (nominal value), 60 and 170 knots. The values of the parameters a32, a34 and b32 are shown in Table 2.

    Table 2. Parameters

    The transition matrix is given by

    As for the previous example, several numerical experiments are performed using the cone complementary algorithm. These tests are realised for various specifications on the H∞ performance (γ∞). Some cases are presented in Table 3, where K0 is the controller obtained as the center of the stabilising ellipsoid.

    Table 3. Numerical experiments

    7. Conclusion

    In this paper, the static output feedback H∞ control of continuous-time AFTCSMP was considered within a new framework, based on the synthesis of ellipsoidal sets of controllers and introduced in [PEA 02, PEA 05]. The difficulty resulting from the fact that the controller depends only on the FDI process was shown to be naturally dealt with in this context. It was also shown that the obtained results can easily be applied to the problem of mode-independent static output feedback H∞ control of Markovian Jump Linear Systems. The numerical resolution was carried out using a cone complementary algorithm and its running was illustrated on classical examples from the literature.

    8. References

    [ABE 05a] ABERKANE S., PONSART J., SAUTER D., Output Feedback H∞ Control of a Class of Stochastic Hybrid Systems with Wiener Process via Convex Analysis, Submitted, 2005.

    [ABE 05b] ABERKANE S., PONSART J., SAUTER D., Output Feedback Stochastic Stabilization of Active Fault Tolerant Control Systems: LMI Formulation, 16th IFAC World Congress, Prague, Czech Republic, 2005.

    [ABE 05c] ABERKANE S., SAUTER D., PONSART J., H∞ Stochastic Stabilization of Active Fault Tolerant Control Systems: Convex Approach, 44th IEEE Conference on Decision and Control and European Control Conference ECC 2005, Seville, Spain, 2005.

    [ARN 74] ARNOLD L., Stochastic Differential Equations: Theory and Applications, John Wiley & Sons, New York, 1974.

    [BOU 99] BOUKAS E. K., Exponential stabilizability of stochastic systems with Markovian jumping parameters, Automatica, vol. 35, 1999, p. 1437–1441.

    [BOU 05] BOUKAS E. K., Stabilization of Stochastic Nonlinear Hybrid Systems, Int. J. Innovative Computing, Information and Control, vol. 1, 2005, p. 131–141.

    [BOU 06] BOUKAS E. K., Static Output Feedback Control for Stochastic Hybrid Systems: LMI Approach, Automatica, vol. 42, 2006, p. 183–188.

    [COS 99] COSTA O. L. V., DO VAL J. B. R., GEROMEL J. C., Continuous-time state-feedback H2-control of Markovian jump linear systems via convex analysis, Automatica, vol. 35, 1999, p. 259–268.

    [FAR 00] DE FARIAS D. P., GEROMEL J. C., DO VAL J. B. R., COSTA O. L. V., Output Feedback Control of Markov Jump Linear Systems in Continuous-Time, IEEE Transactions on Automatic Control, vol. 45, 2000, p. 944–949.

    [GHA 97] EL GHAOUI L., OUSTRY F., AIT RAMI M., A Cone Complementary Linearization Algorithm for Static Output-Feedback and Related Problems, IEEE Transactions on Automatic Control, vol. 42, 1997, p. 1171–1176.

    [JI 90] JI Y., CHIZECK H. J., Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control, IEEE Transactions on Automatic Control, vol. 35, 1990, p. 777–788.

    [JI 92] JI Y., CHIZECK H. J., Jump linear quadratic Gaussian control in continuous time, IEEE Transactions on Automatic Control, vol. 37, 1992, p. 1884–1892.

    [KUS 67] KUSHNER H. J., Stochastic Stability and Control, Mathematics in Science and Engineering, vol. 33, Academic Press, New York, 1967.

    [MAH 99a] MAHMOUD M., JIANG J., ZHANG Y., Analysis of the Stochastic Stability for Active Fault Tolerant Control Systems, Proceedings of the 38th IEEE Conference on Decision & Control, Phoenix, Arizona USA, 1999.

    [MAH 99b] MAHMOUD M., JIANG J., ZHANG Y., Optimal Control Law for Fault Tolerant Control Systems, Proceedings of the 39th IEEE Conference on Decision & Control, Sydney, Australia, 1999.

    [MAH 01] MAHMOUD M., JIANG J., ZHANG Y., Stochastic Stability Analysis of Active Fault-Tolerant Control Systems in the Presence of Noise, IEEE Transactions on Automatic Control, vol. 46, 2001, p. 1810–1815.

    [MAH 03] MAHMOUD M., JIANG J., ZHANG Y., Active Fault Tolerant Control Systems: Stochastic Analysis and Synthesis, Springer, 2003.

    [MAK 04] MAKI M., JIANG J., HAGINO K., A Stability Guaranteed Active Fault Tolerant Control System Against Actuator Failures, International Journal of Robust and Nonlinear Control, vol. 14, 2004, p. 1061–1077.

    [PEA 00] PEAUCELLE D., Formulation générique de problèmes en analyse et commande robuste par des fonctions de Lyapunov dépendant des paramètres, Thèse de doctorat, Université Toulouse III-Paul Sabatier, Toulouse, France, 2000.

    [PEA 02] PEAUCELLE D., ARZELIER D., BERTRAND R., Ellipsoidal Sets for Static Output- Feedback, 15th IFAC World Congress, Barcelona, Spain, 2002.

    [PEA 05] PEAUCELLE D., ARZELIER D., Ellipsoidal Sets for Resilient and Robust Static Output-Feedback, IEEE Transactions on Automatic Control, vol. 50, 2005, p. 899–904.

    [SHI 97] SHI P., BOUKAS E. K., H∞-Control for Markovian Jumping Linear Systems with Parametric Uncertainty, Journal of Optimization Theory and Applications, vol. 95, 1997, p. 75–99.

    [SHI 03] SHI P., BOUKAS E. K., NGUANG S. K., GUO X., Robust disturbance attenuation for discrete-time active fault tolerant control systems with uncertainties, Optimal Control Applications and Methods, vol. 24, 2003, p. 85–101.

    [SKE 98] SKELTON R. E., IWASAKI T., GRIGORIADIS K., A Unified Algebraic Approach to Linear Control Design, Taylor and Francis, 1998.

    [SOU 93] DE SOUZA C. E., FRAGOSO M. D., H∞ Control for Linear Systems with Markovian Jumping Parameters, Control Theory and Advanced Technology, vol. 9, 1993, p. 457–466.

    [SRI 93] SRICHANDER R., WALKER B. K., Stochastic stability analysis for continuous-time fault tolerant control systems, International Journal of Control, vol. 57, 1993, p. 433–452.

    [SYR 97] SYRMOS V. L., ABDALLAH C. T., DORATO P., GRIGORIADIS K., Static Output Feedback: A Survey, Automatica, vol. 33, 1997, p. 125–137.

    [ZHA 03] ZHANG Y., JIANG J., Bibliographical review on reconfigurable fault-tolerant control systems, IFAC SAFEPROCESS,Washington, 2003.


    A Contribution to the Study of Periodic Systems in the Behavioral Approach

    José Carlos Aleixo¹ — Jan Willem Polderman² — Paula Rocha³

    ¹ Department of Mathematics

    University of Beira Interior

    Rua Marquês d’Ávila e Bolama

    6201-001 Covilhã – Portugal

    jcaleixo@mat.ubi.pt

    ² Department of Applied Mathematics

    University of Twente

    P.O. Box 217

    7500 AE Enschede – The Netherlands

    j.w.polderman@math.utwente.nl

    ³ Department of Mathematics

    University of Aveiro

    3810-193 Aveiro – Portugal

    procha@mat.ua.pt


    ABSTRACT. In this paper we obtain new results on periodic kernel representations and propose a definition of image representation for periodic behaviors. Further, we characterize controllability and autonomicity in representation terms. We also show that the concept of free variable used in the time-invariant case cannot be carried over to the periodic case in a straightforward manner and introduce a new concept of variable freeness (P-periodic freeness). This allows us to define input/output structures for periodic behaviors.

    KEYWORDS: Periodic systems; behaviors; representations


    1. Introduction

    In this paper we present new results on the behavioral theory of linear periodically time-varying systems based on the framework developed by Kuijper and Willems, see [KUI 97] and the references therein. This approach uses a technique known as lifting, which associates with each periodic behavior a time-invariant one. This allows us to derive many results for periodic systems based on the existing ones for the time-invariant case. Using this technique, we present some further insights into (what we call) kernel and image representations. Moreover we study the structural properties of controllability and autonomicity and provide a characterization of these properties in terms of those representations. We show that, analogous to what happens for time-invariant systems, the correspondence between controllability and the existence of image representations also holds for periodic behaviors. However, in spite of the many formal resemblances, there are some fundamental differences between time-invariant and periodic behaviors. This is, for instance, the case with the relationship between free variables, controllability and autonomicity. Indeed, as we shall see, the usual definition of freeness used in the time-invariant case is not suitable for periodic systems. In order to overcome this complication we introduce a new concept of periodically free variable, which also allows us to define inputs and outputs in a periodic system.

    2. Periodic behavioral systems

    In the behavioral framework a dynamical system Σ is defined as a triple Σ = (𝕋, 𝕎, 𝔅), with 𝕋 ⊆ ℝ as the time set, 𝕎 as the signal space and 𝔅 ⊆ 𝕎𝕋 as the behavior. Here we focus on the discrete-time case, that is, 𝕋 = ℤ, assuming furthermore that our space of external variables is 𝕎 = ℝq with q ∈ ℕ.

    Let the λ-shift

    σλ : (ℝq)ℤ → (ℝq)ℤ

    be defined by

    (σλ w) (k) := w (k + λ), k ∈ ℤ.

    Whereas the behavior of a time-invariant system is characterized by its invariance under the time shift, that is,

    σ𝔅 = 𝔅,

    periodic behaviors are characterized by their invariance with respect to the P-shift σP (P ∈ ℕ), as stated in the next definition.

    DEFINITION 2.1 [KUI 97] A system Σ is said to be P-periodic (with P ∈ ℕ) if its behavior 𝔅 satisfies σP 𝔅 = 𝔅.
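For intuition, the λ-shift and P-shift invariance can be checked on finite windows of concrete trajectories. The sketch below is purely illustrative (it is not part of the paper's development, and a finite window can of course only approximate a check over all of ℤ); it uses the hypothetical 2-periodic law w(2k) = 0:

```python
# Illustrative sketch: trajectories as functions Z -> R, the lambda-shift
# (sigma^lam w)(k) = w(k + lam), and a finite-window check that the
# hypothetical 2-periodic law "w(2k) = 0" is invariant under sigma^2
# but not under sigma (so the behavior is 2-periodic, not time-invariant).

def shift(w, lam):
    """Return the shifted trajectory sigma^lam w."""
    return lambda k: w(k + lam)

def satisfies_law(w, window=range(-6, 7)):
    """Check w(k) = 0 at even instants on a finite window only."""
    return all(w(k) == 0 for k in window if k % 2 == 0)

w = lambda k: 0 if k % 2 == 0 else k   # a trajectory in the behavior
assert satisfies_law(w)
assert satisfies_law(shift(w, 2))      # sigma^2 w stays in the behavior
assert not satisfies_law(shift(w, 1))  # sigma w leaves it
```

Note that Python's `%` returns a nonnegative remainder for negative arguments, so the even-instant test is correct on the whole window.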

    3. P-periodic kernel representations – PPKR

    According to [KUI 97] and [WIL 91], a behavior 𝔅 is a σP-invariant linear closed subspace of (ℝq)ℤ (in the topology of point-wise convergence) if and only if it has a representation of the type

    (Rt (σ, σ−1) w) (Pk + t) = 0, k ∈ ℤ, t = 1, …, P, (1)

    where Rt ∈ ℝgt×q [ξ, ξ−1]. Note that the Laurent-polynomial matrices Rt need not have the same number of rows (in fact we could even have some gt equal to zero, meaning that the corresponding matrix Rt would be void and no restrictions would be imposed at the time instants Pk + t). Analogously to the time-invariant case, although with some abuse of language, we refer to (1) as a P-periodic kernel representation (PPKR).

    A common approach in dealing with periodic systems is to relate them with suitable time-invariant ones. Here, following [KUI 97], we associate with a P-periodic behavior 𝔅 a time-invariant behavior 𝔅L, the lifted behavior, defined by

    𝔅L := L𝔅,

    where L is the linear map

    L : (ℝq)ℤ → (ℝPq)ℤ

    defined by

    (Lw) (k) := col (w (Pk + 1), …, w (Pk + P)), k ∈ ℤ.
    Note that, since

    the P-periodic kernel representation (1) can be written as

    (2)

    where

    (3)

    From now on we refer to the matrix R (ξ, ξ−1) as a PPKR matrix of the corresponding behavior.
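The lifting underlying these manipulations can be pictured concretely: a trajectory is cut into consecutive blocks of length P. The sketch below assumes the offset convention (Lw)(k) = (w(Pk + 1), …, w(Pk + P)), matching the time instants Pk + t, t = 1, …, P, used above; it is an illustration, not part of the formal development.

```python
# Sketch of the lifting map L: a scalar trajectory w is cut into
# consecutive blocks of length P, so (Lw)(k) = (w(Pk+1), ..., w(Pk+P)).
# Offsets follow the convention t = 1, ..., P used in the text.

def lift(w, P, k_range):
    """Return the lifted trajectory Lw as a dict k -> tuple of length P."""
    return {k: tuple(w(P * k + t) for t in range(1, P + 1)) for k in k_range}

w = lambda k: k                      # any scalar trajectory
Lw = lift(w, 3, range(-2, 3))
assert Lw[0] == (1, 2, 3)            # w(1), w(2), w(3)
assert Lw[-1] == (-2, -1, 0)         # w(-2), w(-1), w(0)
```

The time-invariance of the lifted behavior corresponds to the fact that shifting w by P positions shifts Lw by exactly one block.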

    Decomposing R (ξ, ξ−1) as

    (4)

    with

    (5)

    and

    (6)

    and recalling the definition of the lifted trajectory Lw associated with w, (2) can be written as

    This allows us to conclude that the lifted behavior L𝔅 is given by the kernel representation

    Taking into account that this reasoning can be reversed, we obtain the following result.

    LEMMA 3.1 [KUI 97] A P-periodic behavior 𝔅 is given by the kernel representation (1), that is,

    where RL is given as in (6).

    By using this one-to-one relation, some known results for the time-invariant case can be somehow mimicked into the P-periodic case. For instance, this happens with two important issues which are the questions of kernel representation equivalence and minimality.

    THEOREM 3.2 [ALE 05] Let 𝔅 and 𝔅′ be two P-periodic behaviors with representation matrices R (ξ, ξ−1) and R′ (ξ, ξ−1), respectively. Then 𝔅 ⊂ 𝔅′ if and only if there exists a Laurent-polynomial matrix L (ξ, ξ−1) such that

    R′ (ξ, ξ−1) = L (ξP, ξ−P) R (ξ, ξ−1).

    This result can be proven as follows. Consider the time-invariant behaviors 𝔅L = ker RL (σ, σ−1) and 𝔅′L = ker R′L (σ, σ−1) associated with 𝔅 and 𝔅′, respectively. Then 𝔅 ⊂ 𝔅′ if and only if 𝔅L ⊂ 𝔅′L, meaning that there exists a Laurent-polynomial matrix L (ξ, ξ−1) such that R′L = LRL, which implies the desired relation between R′ and R.

    This theorem yields the following fundamental result, which is the counterpart for P-periodic behaviors of a similar result for time-invariant behaviors, [POL 98, Theorem 3.6.2].

    THEOREM 3.3 [ALE 05] Let 𝔅 and 𝔅′ be two P-periodic behaviors with representation matrices R (ξ, ξ−1) and R′ (ξ, ξ−1), respectively, possessing the same number of rows. Then 𝔅 = 𝔅′ if and only if there exists a unimodular matrix U (ξ, ξ−1) such that

    R′ (ξ, ξ−1) = U (ξP, ξ−P) R (ξ, ξ−1).

    As for the question of minimality, given a linear time-invariant system with behavior 𝔅 described by

    R (σ, σ−1) w = 0, (7)

    with R (ξ, ξ−1) ∈ ℝg×q [ξ, ξ−1], we say that the representation (7) is minimal if the number of rows of the matrix R (ξ, ξ−1) is minimal (among all other representations of 𝔅). This is equivalent to saying that R (ξ, ξ−1) has full row rank (over ℝ[ξ, ξ−1]).

    In the P-periodic case, we adopt the definition of minimality from the time-invariant case.

    DEFINITION 3.4 [ALE 05] A representation matrix R ∈ ℝg×q [ξ, ξ−1] of a P-periodic system Σ = (ℤ, ℝq, 𝔅) is said to be a minimal representation if for any other representation R′ ∈ ℝg′×q [ξ, ξ−1] of Σ, there holds g ≤ g′.

    It is not difficult to check that a representation R (ξ, ξ−1) of a P-periodic system Σ is minimal if and only if the same is true for the corresponding representation RL (ξ, ξ−1) of the associated time-invariant lifted system ΣL. Thus R (ξ, ξ−1) is minimal if and only if RL (ξ, ξ−1) has full row rank over ℝ[ξ, ξ−1]. The next lemma translates this in terms of the matrix R (ξ, ξ−1) itself.

    LEMMA 3.5 [ALE 05] Let R (ξ, ξ−1) ∈ ℝg×q [ξ, ξ−1] be the representation matrix of a P-periodic system and consider the corresponding matrix RL (ξ, ξ−1) ∈ ℝg×Pq [ξ, ξ−1] given by (4) and (6). Then, the following conditions are equivalent:

    (i) RL (ξ, ξ−1) has full row rank over ℝ[ξ, ξ−1];

    (ii) R (ξ, ξ−1) has full row rank over ℝ[ξP, ξ−P] (i.e., if r (ξP, ξ−P) ∈ ℝ¹×g [ξP, ξ−P] is such that r (ξP, ξ−P) R (ξ, ξ−1) = 0 ∈ ℝ¹×q [ξ, ξ−1], then r (ξP, ξ−P) = 0).

    Together with the previous considerations, this result yields the following characterization of minimality.

    THEOREM 3.6 [ALE 05] Let R (ξ, ξ−1) ∈ ℝg×q [ξ, ξ−1] be the representation matrix of a P-periodic system. Then R (ξ, ξ−1) is a minimal representation if and only if it has full row rank over ℝ[ξP, ξ−P].
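By Theorem 3.6 together with Lemma 3.5, minimality amounts to the lifted matrix RL having full row rank over ℝ[ξ, ξ−1]. Numerically, full row rank of a Laurent-polynomial matrix can be probed by evaluating it at random nonzero points: a full-rank evaluation certifies full polynomial row rank, while rank drops at every sample are only evidence. The snippet below is an illustrative probe with a hypothetical 1×2 example matrix, not part of the paper.

```python
import numpy as np

# Probe full row rank of a Laurent-polynomial matrix RL(xi, xi^{-1}) by
# evaluating it at random nonzero points: if RL(x, 1/x) has full row rank
# at some x, then RL has full row rank over R[xi, xi^{-1}].

def full_row_rank_probe(RL, samples=20, rng=np.random.default_rng(0)):
    """RL: callable x -> 2-D array RL(x, 1/x). Returns True if full row
    rank is witnessed at some random nonzero evaluation point."""
    for _ in range(samples):
        x = rng.uniform(0.5, 2.0)          # nonzero evaluation point
        M = RL(x)
        if np.linalg.matrix_rank(M) == M.shape[0]:
            return True
    return False                           # evidence only, not a proof

# Hypothetical 1x2 example: RL(xi, xi^{-1}) = [1 + xi^{-1}, xi]
RL = lambda x: np.array([[1 + 1 / x, x]])
assert full_row_rank_probe(RL)
```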

    4. P-periodic image representations – PPIR

    Image representations constitute an alternative system description in the time-invariant case. As a generalization of such representations we introduce here P-periodic image representations (PPIR).

    DEFINITION 4.1 A behavior is said to have a PPIR if it can be described by equations of the form:

    (8)

    where w ∈ (ℝq)ℤ is the system variable and v is an auxiliary variable taking values in ℝℓ, ℓ ∈ ℕ.

    Notice that (8) can be written as

    with

    We refer to this matrix as a PPIR matrix.

    Consequently

    where ML is such that

    (9)

    Therefore, if 𝔅 is a P-periodic behavior with PPIR matrix M, L𝔅 is a time-invariant behavior with image representation ML. It turns out that the converse also holds true, yielding the following result.

    THEOREM 4.2 A P-periodic behavior 𝔅 ⊂ (ℝq)ℤ has a PPIR if and only if the associated lifted behavior L𝔅 has an image representation.

    5. Controllability

    Loosely speaking, a behavior 𝔅 is said to be controllable if the past of every trajectory in 𝔅 can be concatenated with the future of an arbitrary trajectory in this behavior. More concretely,

    DEFINITION 5.1 [POL 98] A behavior 𝔅 is said to be controllable if for all w1, w2 ∈ 𝔅 and k0 ∈ ℤ, there exist k1 ≥ 0 and w ∈ 𝔅 such that w (k) = w1 (k), for k ≤ k0, and w (k) = w2 (k), for k > k0 + k1.

    As stated in the next theorem, the controllability of a P-periodic behavior is equivalent to the controllability of its associated lifted system.

    THEOREM 5.2 [ALE 05] A P-periodic behavior 𝔅 is controllable if and only if the corresponding lifted behavior L𝔅 is controllable.

    From Theorem 5.2, together with the characterization of behavioral controllability given in [WIL 91, Theorem V.2], it is possible to characterize the controllability of P-periodic systems.

    PROPOSITION 5.3 [ALE 05] Let Σ = (ℤ, ℝq, 𝔅) be a P-periodic system, represented by (1), with representation matrix R as in (3). Then Σ is controllable if and only if the corresponding matrix RL (see (4) and (6)) is such that RL (λ, λ−1) has constant rank over ℂ \ {0}.

    In case the matrix RL (ξ, ξ−1) ∈ ℝg×Pq [ξ, ξ−1] has full row rank, the condition that RL (λ, λ−1) has constant rank over ℂ \ {0} is equivalent to saying that RL (ξ, ξ−1) is left-prime, i.e., all its left divisors are unimodular matrices in ℝg×g [ξ, ξ−1]. It turns out that the left-primeness of RL (ξ, ξ−1) can be related to the following primeness property of R (ξ, ξ−1).
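Proposition 5.3 suggests a simple numerical experiment: sample RL(λ, λ−1) at points scattered over ℂ \ {0} and compare the ranks. The sketch below uses hypothetical 1×2 examples and is only illustrative; random sampling can miss isolated rank drops, so candidate roots should additionally be checked explicitly.

```python
import numpy as np

# Sample-based check of the constant-rank condition of Proposition 5.3:
# evaluate RL(lam, 1/lam) at points of C \ {0} and compare the numerical
# ranks. Equal ranks at all samples is evidence (not proof) of
# controllability; an explicit rank drop exhibits an uncontrollable mode.

def constant_rank_probe(RL, samples=50, rng=np.random.default_rng(1)):
    radii = rng.uniform(0.3, 3.0, samples)
    angles = rng.uniform(0.0, 2.0 * np.pi, samples)
    pts = radii * np.exp(1j * angles)              # nonzero complex points
    ranks = {np.linalg.matrix_rank(RL(lam)) for lam in pts}
    return len(ranks) == 1

# Hypothetical examples (1x2 Laurent-polynomial matrices):
RL_ctrl = lambda lam: np.array([[lam, 1.0]])            # never drops rank
RL_unctrl = lambda lam: np.array([[lam - 1, lam - 1]])  # drops rank at lam = 1
assert constant_rank_probe(RL_ctrl)
# Random samples almost surely miss lam = 1, so check the root directly:
assert np.linalg.matrix_rank(RL_unctrl(1.0 + 0j)) == 0
```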

    DEFINITION 5.4 [ALE 05] A Laurent-polynomial matrix R (ξ, ξ−1) ∈ ℝg×q [ξ, ξ−1] with full row rank over ℝ[ξP, ξ−P] is said to be left-prime over ℝ[ξP, ξ−P], or simply P-left-prime, if whenever it is factored as

    R (ξ, ξ−1) = D (ξP, ξ−P) R̄ (ξ, ξ−1),

    with D (ξP, ξ−P) ∈ ℝg×g [ξP, ξ−P], then the factor D (ξP, ξ−P) and, equivalently, D (ξ, ξ−1), are unimodular (over ℝ[ξP, ξ−P] and ℝ[ξ, ξ−1], respectively).

    LEMMA 5.5 [ALE 05] Let P ∈ ℕ and let R (ξ, ξ−1) ∈ ℝg×q [ξ, ξ−1] have full row rank over ℝ[ξP, ξ−P]. Consider the associated matrix RL (ξ, ξ−1) ∈ ℝg×Pq [ξ, ξ−1] according to the decomposition (4). Then, the following conditions are equivalent:

    (i) RL (ξ, ξ−1) is left-prime;

    (ii) R (ξ, ξ−1) is P-left-prime.

    This leads to the following direct characterization of controllability.

    THEOREM 5.6 [ALE 05] A P-periodic system Σ = (ℤ, ℝq, 𝔅) with PPKR is controllable if and only if its minimal PPKR matrices R (ξ, ξ−1) are P-left-prime.

    Since, for time-invariant behaviors, there is an equivalence between behavioral controllability and the existence of image representations (see [POL 98]), Theorem 4.2, together with Theorem 5.2, makes it possible to prove the following result.

    THEOREM 5.7 A P-periodic behavior 𝔅 has a PPIR if and only if it is controllable.

    Combining Theorems 5.6 and 5.7 we can state that:

    THEOREM 5.8 Let Σ = (ℤ, ℝq, 𝔅) be a P-periodic system with PPKR. Then the following are equivalent:

    (i) 𝔅 is controllable;

    (ii) all the minimal PPKR matrices of 𝔅 are P-left-prime;

    (iii) 𝔅 has a PPIR.

    6. Autonomicity

    Autonomicity is the opposite of controllability. Indeed, whereas in a controllable behavior the future of a trajectory is independent of its past, in an autonomous behavior every trajectory is uniquely determined by its past.

    DEFINITION 6.1 [WIL 91] A behavior 𝔅 is said to be autonomous if for all k0 ∈ ℤ and all w1, w2 ∈ 𝔅:

    w1 (k) = w2 (k) for k ≤ k0 ⟹ w1 = w2.

    Similarly to what happens with controllability, the autonomicity of 𝔅 and that of L𝔅 are one-to-one related.

    THEOREM 6.2 [KUI 97] Let Σ = (ℤ, ℝq, 𝔅) be a P-periodic system. Then 𝔅 is autonomous if and only if L𝔅 is autonomous.

    Taking into account the characterization of autonomicity for time-invariant behaviors given in [POL 98], the following result is trivially obtained.

    COROLLARY 6.3 Let Σ = (ℤ, ℝq, 𝔅) be a P-periodic system with a PPKR and representation matrix R. Then 𝔅 is autonomous if and only if the corresponding representation matrix RL of the associated lifted system has full column rank (fcr).

    7. Free variables

    Given a behavior 𝔅 ⊂ (ℝq)ℤ, a component wi of the system variable w is said to be free if for all α ∈ ℝℤ there exists a trajectory w* ∈ 𝔅 such that w*i (k) = α (k), k ∈ ℤ. This means that wi is not restricted by the system laws.

    The existence or absence of free variables is related, in the time-invariant case, to properties such as controllability and autonomicity: a non-trivial time-invariant controllable behavior must have free variables; on the other hand, the absence of free variables is equivalent to autonomicity [POL 98]. As the next examples show, this no longer holds in the P-periodic case.

    Example 1 Consider the 2-periodic behavior 𝔅 with PPKR

    i.e., described by

    Since

    its associated lifted behavior L𝔅 is described by the kernel representation

    where

    It is also possible to describe this lifted behavior in terms of an image representation, namely

    In order to achieve the decomposition (9) we use the fact that L𝔅 can also be given as

    with ML (ξ, ξ−1) ∈ ℝ²×²ℓ [ξ, ξ−1], such that

    Therefore the original 2-periodic behavior 𝔅 has a PPIR matrix M given by

    that is, the 2-periodic behavior allows the PPIR

    (10)

    and is consequently controllable. However, 𝔅 has no free variables, since the value of w at each even time instant and at the consecutive one must coincide.

    Example 2 Let 𝔅 ⊂ ℝℤ be the 2-periodic behavior described by w (2k) = 0, k ∈ ℤ. Clearly the only system variable w is not free, since it is required to be zero at even time instants. However, 𝔅 is not autonomous. Indeed, fixing the values of w (k) for k ≤ 0 does not yield a unique trajectory, since the values of w (2k + 1), k > 0, can still be chosen freely. Thus the absence of free variables does not imply autonomicity.
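The three claims of Example 2 can be checked mechanically on a finite window (a finite check is only illustrative, since the definitions quantify over all of ℤ):

```python
# Finite-window illustration of Example 2: the 2-periodic behavior
# B = { w : w(2k) = 0 for all k }. On a window we show that
#   (a) w is not free: even-instant values cannot be prescribed;
#   (b) w is 2-periodically free with offset 1: odd-instant values can
#       be prescribed arbitrarily;
#   (c) B is not autonomous: trajectories agreeing for k <= 0 may differ later.

def in_behavior(w):                      # w: dict k -> value on a window
    return all(v == 0 for k, v in w.items() if k % 2 == 0)

window = range(-4, 5)

# (a) trying to impose w(0) = 1 leaves the behavior
assert not in_behavior({k: 1 for k in window})

# (b) any values alpha at odd instants are admissible
alpha = {k: 7 * k for k in window if k % 2 == 1}
w = {k: alpha.get(k, 0) for k in window}
assert in_behavior(w)

# (c) same past (k <= 0), different futures
w1 = {k: 0 for k in window}
w2 = {k: (5 if k == 3 else 0) for k in window}
assert in_behavior(w1) and in_behavior(w2)
assert all(w1[k] == w2[k] for k in window if k <= 0) and w1 != w2
```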

    The analysis of these examples suggests that a different notion of free variable should be considered in the P-periodic case.

    DEFINITION 7.1 Let 𝔅 ⊂ (ℝq)ℤ be a behavior in q variables. The ith system variable wi, i ∈ {1, …, q}, is said to be P-periodically free with offset t, or t-P-periodically free, for t = 1, …, P, if wi (Pk + t), k ∈ ℤ, is not restricted by the behavior. More precisely, if for all α ∈ ℝℤ, there exists w* ∈ 𝔅 such that its ith component satisfies

    w*i (Pk + t) = α (k), k ∈ ℤ.

    Moreover, wi is said to be P-periodically free if it is P-periodically free with offset t for some t = 1, …, P.

    This definition yields the usual notion of free variable for time-invariant behaviors, if one regards time-invariance as 1-periodicity.

    As a direct consequence of this definition we can state the following result.

    PROPOSITION 7.2 Given a P-periodic behavior 𝔅 ⊂ (ℝq)ℤ, the ith system variable wi, i ∈ {1, …, q}, is t-P-periodically free (in 𝔅) if and only if the ((t − 1)q + i)th component of Lw is free in L𝔅.

    Now, a controllable P-periodic behavior must have P-periodically free variables.

    Example 3 As we have seen, the variable w in Example 1 is not free. However, this variable is 2-periodically free. Recall that the associated lifted behavior L𝔅 is described by

    or, equivalently,

    showing that either (Lw)1 or (Lw)2 is free in L𝔅. Thus w is 2-periodically free, since it is 2-periodically free with offset t = 1 or t = 2.

    Moreover, the following characterization of autonomicity in terms of P-periodically free variables holds.

    THEOREM 7.3 Let Σ = (ℤ, ℝq, 𝔅) be a P-periodic system. Then 𝔅 is autonomous if and only if 𝔅 has no P-periodically free variables.

    Example 4 As we have seen, although the behavior 𝔅 in Example 2 is not autonomous, the system variable w is not free. Notice, however, that w is 2-periodically free, since in this case we have

    which leads to

    Therefore the associated lifted behavior L𝔅 is described by

    equivalently

    or still

    Thus (Lw)1 is free and w is 2-periodically free, since it is 2-periodically free with offset t = 1.

    The notion of P-periodic freeness plays an important role in the definition of input/output structures for periodic behaviors. This involves considering simultaneously free components of the system variable. However, in a P-periodic behavior, one has to take into account that such components may be P-periodically free with different offsets. This is illustrated in the following example.

    Example 5 Let 𝔅 ⊂ (ℝ²)ℤ be the 3-periodic behavior given by the equations

    Clearly the values of w1 (3k + 1), w1 (3k + 2) and w2 (3k + 3) (k ∈ ℤ) are free, i.e., w1 is 3-periodically free with offsets 1 and 2, and w2 is 3-periodically free with offset 3. Note further that none of the variables is free at all the possible offsets t = 1, 2, 3.

    This can be put in a more compact form by saying that (w1, w1, w2) is (1, 2, 3)-3-periodically free. Note that, in this case, the freeness in the system cannot be assigned to one of the two system variables alone. Therefore, neither w1 nor w2 can be taken as an input in the classical, time-invariant sense. This suggests an alternative approach.

    Using the operator ΩP,q introduced in section 3, we have

    Thus

    where the sub-indices correspond to the components of Ω3,2 (σ)w. Now

    is a free set of variables of Ω3,2 (σ)w, since u (3k) can be chosen freely for all k ∈ ℤ, i.e., given α ∈ (ℝ³)ℤ, there exists w* ∈ 𝔅 such that

    Moreover, u is a maximally free set of variables, in the sense that once u is fixed (say, u (3k) = 0, k ∈ ℤ) no other free components are left in Ω3,2 (σ)w. Therefore, we call u a P-periodic input of 𝔅. The complementary components of Ω3,2 (σ)w,

    constitute the corresponding P-periodic output.

    In the general case, given a P-periodic behavior 𝔅 with variable w, a choice of (possibly repeated) components of w, (wi1 · · · wim)T, ir ∈ {1, …, q} for r = 1, …, m, is said to be (t1, …, tm)-P-periodically free, tr ∈ {1, …, P} for r = 1, …, m, if for all αr ∈ ℝℤ, there exists w* ∈ 𝔅 such that its irth component satisfies

    Note that (wi1 · · ·wim)T is (t1, … ,tm)-P-periodically free if and only if

    is a free set of variables of ΩP,q (σ)w, with ΩP,q (ξ) defined as in (5).

    DEFINITION 7.4 Given a P-periodic behavior 𝔅 ⊂ (ℝq)ℤ with variable w = (w1 · · · wq)T, a choice of components

    of ΩP,q (σ)w is said to be a P-periodic input of if u is a maximally free set of variables of ΩP,q (σ)w in the following sense:

    (i) u is free, i.e., ∀α ∈ (ℝm)ℤ ∃w* ∈ 𝔅 s.t.

    (ii) The set of trajectories

    has no free variables.

    A choice of components y of ΩP,q (σ)w is said to be a P-periodic output of 𝔅 if (u, y) is a partition of the components of ΩP,q (σ)w. Finally, an input/output structure for 𝔅 is defined as a partition (u, y) of the components of ΩP,q (σ)w such that u is an input and y is an output.

    Since

    it is obvious that:

    PROPOSITION 7.5 ũ = ((Lw)ℓ1, …, (Lw)ℓm) is an input of the time-invariant behavior L𝔅 if and only if u = ((ΩP,q (σ)w)ℓ1, …, (ΩP,q (σ)w)ℓm) is a P-periodic input of 𝔅.

    Taking into account the relationship between the P-periodically free variables of a P-periodic behavior and the free variables of its associated lifted system, it is now possible to define input/output structures in the periodic case based on the available results for time-invariant systems. This leads to the following theorem.

    THEOREM 7.6 Every P-periodic behavior admits an input/output structure.

    Example 6 Consider the 3-periodic behavior 𝔅 with PPKR matrix

    Its associated lifted system also has a kernel representation, that is,

    with

    Letting

    the lifted system can be represented as

    Since det P (ξ, ξ−1) ≠ 0, ũ := [(Lw)1 (Lw)6]T is an input in L𝔅 and, consequently,

    is a 3-periodic input for 𝔅.

    8. Conclusions

    Continuing the work carried out in [KUI 97] and [ALE 05], we have considered P-periodic systems within the framework of the behavioral approach. We analyzed some properties of P-periodic kernel representations, such as equivalence and minimality. Moreover, at the level of system-theoretic properties, we have obtained further results on controllability and autonomicity. We defined a new type of representation, the P-periodic image representation (PPIR), which generalizes time-invariant image representations. Further, we have introduced a new concept of P-periodic free variables and analyzed the relationship between the existence of such variables and controllability and autonomicity. Related to our notion of freeness, we defined the concept of P-periodic input, as well as input/output structures for P-periodic systems. In our opinion, these preliminary results will play an important role in other contexts, such as the study of control problems for P-periodic behaviors.

    9. Acknowledgements

    This research was partially supported by the European Community Marie Curie Fellowship in the framework of the Control Training Site programme (grant no. HPMTGH-01-00278-98) during the first author’s stay at the Department of Applied Mathematics, University of Twente, The Netherlands, as well as by the Unidade de Investigação Matemática e Aplicações-UIMA, University of Aveiro, Portugal, through the Programa Operacional Ciência, Tecnologia e Inovação-POCTI of the Fundação para a Ciência e Tecnologia-FCT, co-financed by the European Union fund FEDER.

    10. References

    [ALE 05] ALEIXO J., POLDERMAN J., ROCHA P., Further results on periodically time-varying behavioral systems, Proceedings of the 44th IEEE Conference on Decision & Control, and the European Control Conference - CDC-ECC’05, Seville, Spain, 2005, p. 808–813.

    [KUI 97] KUIJPER M., WILLEMS J., A behavioral framework for periodically time-varying systems, Proceedings of the IEEE 36th Conference on Decision & Control, vol. 3, San Diego, California USA, 1997, p. 2013–2016.

    [POL 98] POLDERMAN J. W., WILLEMS J. C., Introduction to Mathematical Systems Theory: A Behavioral Approach, vol. 26 of Texts in Applied Mathematics, Springer-Verlag, New York, 1998.

    [WIL 91] WILLEMS J., Paradigms and Puzzles in the Theory of Dynamical Systems, IEEE Transactions on Automatic Control, vol. 36, num. 3, 1991, p. 259–294.


    Iteratively Improving Moving Horizon Observers for Repetitive Processes

    Ignacio Alvarado¹, Rolf Findeisen², Peter Kühl², Frank Allgöwer² and Daniel Limón¹

    ¹ University of Seville, 41092 Seville, Spain

    ² University of Stuttgart, 70550 Stuttgart, Germany

    (alvarado,limon)@cartuja.us.es, (findeisen,kuhl,allgower)@ist.uni-stuttgart.de


    ABSTRACT. This paper considers the problem of state estimation for repetitive nonlinear systems. Taking the repetitive nature of the process into account, a new state estimation scheme is proposed which iteratively improves the estimate from repetition to repetition. The scheme combines ideas from iterative learning control and moving horizon state estimation. The state estimate during every repetition is based on approximately minimizing the deviation between the measured and estimated output. Stability and iterative improvement of the state estimates are ensured by enforcing a sufficient contraction of the deviation between the measured and estimated output over the considered estimation window. As shown, under the contraction constraints the state estimation scheme ensures asymptotic convergence of the state estimation error in the nominal case, provided that the system satisfies a uniform reconstructability condition.

    KEYWORDS: Observers, Repetitive processes, Nonlinear Systems


    1. Introduction

    Many processes are inherently repetitive, i.e., the same process happens over and over again. Typical examples of repetitively operating processes are:

    – Industrial robot operations for welding, cutting, etc.

    – Batch processes in the chemical or the pharmaceutical industry.

    – Synchrotrons for particle acceleration.

    – Rail vehicles operated on a specific track over and over again.

    The industrial importance of these processes has led to significant research interest in their control, modeling and analysis over the past decades [LON 00, MOO 00, MOO 93].

    In comparison to the control of standard, non-repetitive continuous-time systems, the control of repetitive processes poses several challenges. Most of these challenges arise from the fact that one has to consider two distinct time scales: the time elapsing within every repetition, and the iteration or repetition number, which is naturally discrete-valued. Due to this additional degree of freedom in time, repetitive systems are sometimes also referred to as 2D systems or two-degree-of-freedom systems.

    By now, several results on the control of repetitive/iterative systems have been established. We do not go into details here; see for example [LON 00, MOO 00, MOO 93].

    The promising results in the area of repetitive, iterative and learning control schemes for repetitive processes naturally lead to the question whether the same algorithms can be extended to the state estimation problem for repetitive processes. State estimation for repetitive processes is of practical as well as theoretical interest, since the estimated state can be used for various purposes such as process monitoring and state feedback control. The interest in the state estimation problem is also driven by the fact that very often the development of new control methods has given birth to analogous observer schemes. Thus, the focus of this paper is on the state estimation problem for repetitive processes, for which, surprisingly, only limited results are available so far, see for example [TAY 03].

    In this paper we propose to combine ideas from iterative learning control with ideas from moving horizon observers [ZIM 94, MIC 95, ROB 95, ALA 99, RAO 03]. Specifically, we outline an extension of the contraction-based moving horizon state estimation scheme proposed in [MIC 95] to repetitive processes.
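As background for readers unfamiliar with moving horizon estimation, the basic (non-repetitive, unconstrained) idea can be sketched as a linear least-squares fit of the model to a window of past outputs. The system matrices below are hypothetical, and the contraction constraint of [MIC 95] on which the paper builds is omitted:

```python
import numpy as np

# Minimal moving-horizon estimation sketch for a linear system
#     x(k+1) = A x(k),  y(k) = C x(k):
# estimate x(k-N) by least squares on the last N+1 outputs, then propagate
# forward to obtain the current estimate x(k). Contraction constraints and
# the repetitive-process extension discussed in the text are omitted.

def mhe_estimate(A, C, y_window):
    """y_window: outputs y(k-N), ..., y(k). Returns the estimate of x(k)."""
    N = len(y_window) - 1
    # Stack C, CA, ..., CA^N (the observability-style map over the window).
    Phi = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(N + 1)])
    Y = np.concatenate([np.atleast_1d(y) for y in y_window])
    x0_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return np.linalg.matrix_power(A, N) @ x0_hat   # propagate to time k

# Hypothetical double-integrator-like example:
A = np.array([[1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
x = np.array([2.0, -1.0])
ys = []
for _ in range(5):                 # record y(k-4), ..., y(k)
    ys.append(C @ x)
    x = A @ x
x_hat = mhe_estimate(A, C, ys)     # recovers the state at time k exactly
```

In the nominal, noise-free case the least-squares fit recovers the state exactly whenever the stacked observability map has full column rank, which is the linear counterpart of the uniform reconstructability condition mentioned in the abstract.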

    In moving horizon state estimation the state estimate is obtained by (approximately) minimizing a cost functional, typically the deviation between the estimated and measured output, over a past measurement window. Stability of moving horizon estimation schemes for non-repetitive continuous-time systems is typically achieved either by employing an upper bound on the initial weight, the so-called arrival cost [RAO 03], or by the application of a contraction constraint
