Latin American Applied Research

Print version ISSN 0327-0793

Lat. Am. Appl. Res. vol. 36, no. 2, Bahía Blanca, Apr./June 2006

 

An extended MPC convergence condition

A. H. González¹ and J. L. Marchetti²

¹ Instituto de Desarrollo Tecnológico para la Industria Química, INTEC (UNL - CONICET)
alejgon@ceride.gov.ar

² jlmarch@ceride.gov.ar

Abstract — Nominal convergence of Constrained Model Predictive Control has been extensively analyzed over the last fifteen years. The inclusion of a terminal constraint in the optimization problem and the extension of the prediction horizon to infinity are the main strategies proposed so far to achieve the desired stability. However, when the model inputs are written in incremental form, these strategies tend to become infeasible. This paper extends the contracting-constraint idea by including a simple-to-apply and less restrictive new set of constraints in the optimization problem to ensure nominal convergence.

Keywords — Predictive Control. State Space Model. Receding Horizon. Pseudo Cost Function. Infinite Constraint.

I. INTRODUCTION

The Receding Horizon idea uses an on-line optimization that updates the manipulated variable at each sample time. In the tracking problem, the difference between the predicted future outputs and the set point defines the cost function to be minimized, and nominal stability reduces to ensuring the convergence of the successive optimal costs. Since consecutive optimization problems are in essence different, it is not simple to compare two consecutive cost functions (the prediction horizon recedes in time, so the successive cost functions differ from each other in their location). When an infinite horizon (IHMPC) is used, the end of the consecutive horizons does not vary while the beginning advances. Making use of Bellman's principle of optimality, which states that the tail of any optimal trajectory is itself the optimal trajectory from its starting point, convergence can be guaranteed; see Maciejowski (2000), pp. 191-198. However, when an incremental form of the model is used, the effect of the integrating modes at the end of the control horizon must be zeroed in order to keep the infinite open-loop cost bounded (Rodrigues and Odloak, 2003). As in the case of terminal constraints, this requirement tends to be infeasible and slack variables must be added (Rodrigues and Odloak, 2003; Odloak, 2004).

Following the strategy developed by González et al. (2004), this paper extends the idea of including a set of contracting constraints to achieve output convergence. In that work, a preliminary study of convergence conditions, different from the classical approach, was presented; however, convergence could not be properly proved. Here, two improvements are made: the convergence of the method is proved, and the whole formulation is recast in a state-space model in order to take advantage of its well-known benefits.

II. BASIC FORMULATION OF MPC

Consider a system with n_u inputs and n_y outputs, and consider an optimization cost function defined as the sum of the future errors inside the prediction horizon plus a penalization of the manipulated variable moves, namely

V_k = Σ_{i=1}^{p} e(k + i/k)^T Q e(k + i/k) + Δu_k^T R Δu_k     (1)

where

e(k + i/k) = y(k + i/k) - r
Δu_k = [Δu(k), Δu(k + 1), ..., Δu(k + m - 1)]^T
(2)

and y(k + i/k) is the predicted output, Δu(k + j) are the manipulated variable increments, p is the prediction horizon, m is the control horizon, Q and R are positive definite weighting matrices, and r is the set-point value.
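For concreteness, the cost in (1)-(2) can be evaluated as in the following sketch; the function name and the array layout are illustrative choices.

```python
import numpy as np

def mpc_cost(errors, du, Q, R):
    """Finite-horizon MPC cost: weighted output errors over the prediction
    horizon plus the weighted input-increment penalty, as in (1)-(2).

    errors : (p, ny) array with rows e(k+i/k), i = 1..p
    du     : (m, nu) array with rows Delta u(k+j), j = 0..m-1
    Q, R   : positive definite weighting matrices
    """
    cost = sum(float(e @ Q @ e) for e in errors)   # output-error term
    cost += sum(float(d @ R @ d) for d in du)      # input-increment penalty
    return cost
```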

If Δu* is the optimal input increment vector, then

V_k^* = V_k(Δu_k^*)     (3)

is the optimal cost function value at time k. In the same way, the optimal cost function value at time k+1 will be

V_{k+1}^* = V_{k+1}(Δu_{k+1}^*)     (4)

Now, following the idea used by Rawlings and Muske (1993), an auxiliary pseudo cost function at time k+1 is defined using the optimal values of the input changes calculated at time k:

Ṽ_{k+1} = Σ_{i=2}^{p+1} e(k + i/k + 1)^T Q e(k + i/k + 1) + Δũ_{k+1}^T R Δũ_{k+1}     (5)

where e(k + i/k + 1) is the error at time k + i, calculated at time k + 1 using Δũ_{k+1}, and¹

Δũ_{k+1} = [Δu*(k + 1/k), ..., Δu*(k + m - 1/k), 0]^T     (6)
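The shifted sequence in (6) can be built mechanically from the plan computed at time k; a minimal sketch, assuming the optimal increments are stored row-wise and using illustrative names:

```python
import numpy as np

def shift_increments(du_opt):
    """Pseudo input sequence of (6): drop the first increment, which has
    already been applied, and append a zero move, so the plan at k+1 is the
    tail of the optimal plan computed at time k (cf. footnote 1).

    du_opt : (m, nu) array with rows Delta u*(k/k), ..., Delta u*(k+m-1/k)
    """
    zero_move = np.zeros((1, du_opt.shape[1]))   # no new increment at k+m
    return np.vstack([du_opt[1:], zero_move])
```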

III. STATE SPACE MODEL

The initial model has the form²:

x(k + 1) = A x(k) + B u(k),   y(k) = C x(k)     (7)

where x(k) is the state vector, u(k) the input vector, and y(k) the output vector.
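To fix ideas, the following sketch sets up a small hypothetical model in the form (7) (the numerical values are placeholders) and checks the open-loop stability assumption used later, namely that all eigenvalues of A lie strictly inside the unit circle.

```python
import numpy as np

# Hypothetical model in the form (7): x(k+1) = A x(k) + B u(k), y(k) = C x(k)
A = np.array([[0.9, 0.1],
              [0.0, 0.7]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# Open-loop stability assumption: eigenvalues of A strictly inside the unit circle
assert np.all(np.abs(np.linalg.eigvals(A)) < 1.0)

# One propagation step of the model
x = np.zeros(2)
u = np.array([1.0])
x_next = A @ x + B @ u
y_next = C @ x_next
```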

Now the component errors e(k + i/k + j) are written as functions of Δu in the case that state space models are used.

(8)

where the matrices A_s, A_1 and A_2 are built from the model matrices A, B and C.

In the same way, at time k + 1, the outputs are given by

where the input u*(k) = u(k - 1) + Δu*(k/k) is obtained from the optimal value Δu*(k/k) computed at time k. This corresponds to the receding horizon idea of applying only the first element calculated in each optimization problem; a new optimization problem is then solved at the next sampling time.
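As an illustrative alternative to the closed-form matrices A_s, A_1 and A_2, the same predicted errors can be generated by forward simulation of the model (7), holding the input increments at zero beyond the control horizon; a minimal sketch, with illustrative function and argument names:

```python
import numpy as np

def predict_errors(A, B, C, x, u_prev, du_seq, p, r):
    """Predicted tracking errors e(k+i/k) = y(k+i/k) - r, i = 1..p, given the
    current state x(k), the last applied input u(k-1) and the planned input
    increments du_seq (frozen at zero after the control horizon m)."""
    errors, u, xi = [], u_prev.copy(), x.copy()
    for i in range(p):
        du = du_seq[i] if i < len(du_seq) else np.zeros_like(u_prev)
        u = u + du                 # incremental input parameterization
        xi = A @ xi + B @ u        # model (7)
        errors.append(C @ xi - r)  # tracking error at k+i+1
    return np.array(errors)

# Receding horizon: only the first optimal increment is applied,
# u(k) = u(k-1) + Delta u*(k/k), and the problem is re-solved at k+1.
```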

It can be proved that, for the undisturbed system, if Δũ_{k+1} is used instead of a newly optimized input sequence, then

(9)

therefore, the difference between two successive cost function values, Ṽ_{k+1} and V_k^*, is given by

(10)

where .

IV. THE NEW INFINITE CONTRACTING CONSTRAINT

An inequality constraint that forces the cost function to be non-increasing is given by³

Ṽ_{k+1} - V_k^* ≤ 0

or, taking into account that R is positive definite⁴,

(11)

In order to make the dependence of the last errors on optimization variables explicit, it is useful to express them in terms of Δu:

(12)
(13)

where row_1 and row_p are the first and the last block rows of the matrices A_s, A_1 and A_2:

(14)

and the hat means that the variables are optimization variables.

Another way to find a general expression for the successive contracting constraints is by "moving" the rows through the matrices instead of updating the input and the state vector. That is,

(15)

where Δû_k is the vector of optimization variables and both x(k) and u(k - 1) are feedback measurements. Here, it can be seen that all the information necessary to write the inequality constraints is contained in the matrices A, B and C. Note that in the last expression the matrices A_s, A_1 and A_2 were extended in time (j rows were added to the end of the original definition in (8)).
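As a sketch of how these additional rows follow from A, B and C, the free-response blocks C A^i that multiply x(k) in the extended prediction can be generated as below (the input-dependent blocks are built analogously from C A^i B); the function name is illustrative.

```python
import numpy as np

def extended_state_rows(A, C, p, n_extra):
    """Free-response blocks C A^i, i = 1..p+n_extra: the part of the extended
    prediction that multiplies the measured state x(k).  The n_extra trailing
    blocks are the rows added to the end of the original prediction in order
    to write the successive contracting constraints."""
    rows, Ai = [], np.eye(A.shape[0])
    for _ in range(p + n_extra):
        Ai = Ai @ A            # A^i
        rows.append(C @ Ai)    # contribution of x(k) to y(k+i/k)
    return np.vstack(rows)
```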

Then, the optimization problem that includes the contracting constraint can be written as:

(16)

Finally, it must be shown that, if the shifted sequence Δũ_{k+1} is a feasible solution of the optimization problem at time k + 1, then:

(17)

Thus, the (global) optimum obtained in (16) will not be worse than the one obtained from this particular solution.

However, the required feasibility is not easy to obtain and, as shown in González et al. (2004), it demands an infinite series of (forward) constraints of the following form:

(18)

where Ṽ_{k+j} denotes the successive pseudo cost functions, that is, the pseudo cost functions obtained by applying the correspondingly shifted input sequences.

The complete MPC optimization problem is then:

P1:

(19)
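As an illustration of the requirement behind these forward constraints, the following sketch computes the successive pseudo cost values on the extended (undisturbed) error trajectory, with no increments beyond the control horizon; (18) then amounts to requiring that this sequence be non-increasing. Names and array layout are illustrative.

```python
import numpy as np

def pseudo_cost_sequence(errors_ext, du, Q, R, p, m):
    """Successive pseudo cost values.  At shift j the cost uses the p errors
    e(k+j+1), ..., e(k+j+p) of the extended trajectory and the increments
    still to be applied, Delta u*(k+j/k), ..., Delta u*(k+m-1/k).
    The forward constraints of (18) require costs[j+1] <= costs[j]."""
    n_shifts = errors_ext.shape[0] - p + 1
    costs = []
    for j in range(n_shifts):
        cost = sum(float(e @ Q @ e) for e in errors_ext[j:j + p])
        cost += sum(float(d @ R @ d) for d in du[j:m])
        costs.append(cost)
    return costs
```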

V. DEALING WITH AN INFINITE NUMBER OF CONSTRAINTS

In the formulation presented above, the number of constraints is infinite; however, considering that the open-loop system is stable, only a finite number of constraints needs to be added, since the plant, with null input moves (from k + m on), converges to a finite value⁵.

Since input increments are null from k+m on, the constraints at a generic time k+m+j take the form

(20)

which is an expression in terms of the decision variables and the feedback information only.

Now, when j → ∞, we have

(21)

since matrix A is stable (all its eigenvalues have magnitude lower than unity), and therefore A^j tends to the null matrix. This means that, as j approaches infinity, all the constraints are satisfied.

Note that to satisfy the last equality constraint, it is not necessary that

(22)

where the matrix G = C(I - A)^{-1}B is clearly the gain of the plant and u*(k + m - 1) is the last input injected in the open-loop prediction.

It is important to notice that, in spite of a potentially large number of contracting constraints, this strategy is less restrictive than others, which require (22) to hold. On the other hand, the weakness of this strategy is that the minimum number of constraints needed to ensure the overall convergence (if such a number exists) remains to be determined.

In order to ensure that the infinite set of constraints present in (19) still holds when only a finite number of them is used, it is possible to require that the last input in the control horizon be such that the steady-state value of the open-loop prediction is no greater than the set point. This condition, added to a number of contracting constraints covering the complete dynamic order of the plant, helps to enforce the constraints for all future times. Note that this strategy may be somewhat conservative, but it is always feasible.
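Two practical checks follow from this discussion: how many extended constraints are needed before the contribution of A^j becomes negligible, and whether the last planned input keeps the open-loop steady state at or below the set point. A minimal sketch of both, using the usual state-space gain G = C(I - A)^{-1}B; function names and tolerances are illustrative.

```python
import numpy as np

def truncation_length(A, tol=1e-6, j_max=1000):
    """Smallest j with ||A^j|| < tol: beyond this point the state-dependent
    terms of the remaining contracting constraints are negligible, which is
    why only a finite number of constraints needs to be imposed."""
    Aj = np.eye(A.shape[0])
    for j in range(1, j_max + 1):
        Aj = Aj @ A
        if np.linalg.norm(Aj, 2) < tol:
            return j
    return j_max

def terminal_steady_state_ok(A, B, C, u_last, r):
    """Conservative terminal condition: the open-loop steady-state output
    reached with the last planned input u*(k+m-1) must not exceed the set
    point."""
    G = C @ np.linalg.solve(np.eye(A.shape[0]) - A, B)   # plant gain C (I - A)^{-1} B
    return bool(np.all(G @ u_last <= r))
```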

VI. CONVERGENCE OF THE METHOD

A matrix condition must now be derived to guarantee that both the output error and the input increments converge to zero.

Suppose that the proposed optimization problem has a feasible solution at each sampling time, that is, a Δu(k/k) move exists that minimizes the cost function. The infinite contracting constraint ensures that the cost is non-increasing,

(23)

Then, since the cost is a positive (quadratic) function, the sequence of cost function values is bounded below by zero and therefore converges. From (23), both the input-increment term and the term

(24)

converge to zero for large k. From the latter, two alternatives arise:

(25)

for large k, or

(26)

where e_c represents a predicted output offset.

Note that the condition

(27)

for large k (which implies, in turn, that u(k/k) is also bounded), added to the open-loop stability hypothesis, guarantees that the output error converges to a bounded value.

Theorem 1

If the system to be controlled is stable and problem P1 is feasible at each time step k, then there exist matrices Q and R, and a horizon p, such that the offset e_c converges to zero as k tends to infinity.

Proof:

Suppose that, for k large enough, the output error tends to e_c ≠ 0. Then, taking into account that the input increments necessarily converge to zero, the corresponding cost (considering a SISO case and m = 1 for simplicity) is

(28)

where

and represents the vector of future errors in the absence of new input increments.

In this way, the (non-optimal) input increment that drives the system to the set point (in steady state) is given by

(29)

where G is the plant gain and r is the set point. With this input increment (which may not be the optimal one), the corresponding cost is given by⁶

(30)

Note that this term is nonzero, because it represents the future error in the absence of new input increments. Thus, the condition that guarantees the convergence of the error to zero is

(31)

Now, let us remark the following fact: equation (31) can be expressed as

where A_f is

If the matrices Q and R are adopted such that

then, as p increases, the last element of the vector A_f tends to G, and the last part of the term U - A_f G^{-1} tends to zero. Therefore, since the left-hand side increases with p while the right-hand side tends to a finite value, a prediction horizon p does exist that guarantees (31). Then, matrices Q and R and a horizon p exist that guarantee the convergence of the error to zero, and the theorem is proved.

To summarize, if (31) is true and e_c is different from zero, then in the open-loop optimization problem there is nothing to prevent Δu(k/k) from moving to a non-null value in order to eliminate e_c. Therefore, the null condition of Δu_k (which, in turn, is given by the convergence of the cost function) implies indirectly (by means of the model) that e_c is null. In other words, there is no reason, other than infeasibility, preventing the closed-loop optimization from finding a value of u (u_ss) that takes the output to its desired value r. Recall that this value of u is computed taking the output feedback into account, and therefore it will be accurate for the actual plant.
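As a numerical illustration of the fact this argument rests on, namely that for a stable plant the p-th step-response coefficient tends to the gain G as p grows, the following sketch uses the same hypothetical model introduced earlier; the function name is illustrative.

```python
import numpy as np

def step_response_coeff(A, B, C, p):
    """p-th step-response coefficient S_p = sum_{i=1}^{p} C A^(i-1) B; for a
    stable A it converges to the plant gain G = C (I - A)^{-1} B as p grows."""
    S, Ai = np.zeros((C.shape[0], B.shape[1])), np.eye(A.shape[0])
    for _ in range(p):
        S = S + C @ Ai @ B
        Ai = Ai @ A
    return S

# Check with the hypothetical stable model used earlier
A = np.array([[0.9, 0.1], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
G = C @ np.linalg.solve(np.eye(2) - A, B)
assert np.allclose(step_response_coeff(A, B, C, 200), G, atol=1e-6)
```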

VII. SIMULATION RESULTS

A numerical example taken from Marchetti et al. (1983) is used to illustrate some results; this is a second-order system with a right half plane zero:

Figure 1 shows the closed-loop response of the second-order system to a step change in the set point when the constraints defined in (19) are not included in the control problem, while Fig. 2 shows the corresponding cost indexes. Due to the small output horizon adopted in this case, the closed-loop system becomes unstable, and it can be seen that the index values violate the proposed constraint.


Figure 1: Output response without infinite constraint.


Figure 2: Indexes without infinite constraint. Only three successive indexes are shown.

Figures 3 and 4 show the step response and the indexes when the infinite constraint is used. It can be seen that including the constraints defined in (19) renders the closed-loop system stable. The MPC parameter values for both cases (with and without the infinite constraint) are shown in Table 1.


Figure 3: Output response using infinite constraint.


Figure 4: Indexes using infinite constraint. Only three successive indexes are shown.

Table 1. Parameter values used in the Constrained MPC controller (in both cases: with and without infinite constraint)
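As a purely illustrative companion to this example, the following sketch simulates the step response of a hypothetical discrete-time second-order stable model with a zero outside the unit circle (the discrete counterpart of a right half plane zero); it is not the plant of Marchetti et al. (1983), but it exhibits the same kind of inverse response.

```python
import numpy as np

# Hypothetical second-order, open-loop-stable model with a non-minimum-phase
# zero; it is NOT the plant used in the paper's simulations.
A = np.array([[1.4, -0.45],
              [1.0,  0.0]])     # poles at z = 0.9 and z = 0.5
B = np.array([[1.0],
              [0.0]])
C = np.array([[-0.5, 1.0]])     # zero at z = 2  ->  inverse response

x = np.zeros((2, 1))
y_step = []
for _ in range(40):             # unit-step response
    x = A @ x + B @ np.array([[1.0]])
    y_step.append((C @ x).item())

print(y_step[0], y_step[-1])    # starts negative, settles near the gain (+10)
```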

VIII. CONCLUSIONS

A new MPC convergence condition is proposed for the case in which state-space models are used. The strategy consists of including a set of constraints that forces the MPC cost function to decrease, together with an appropriate selection of the cost weighting matrices. The main contributions of this paper are a detailed proof of convergence of the successive optimizing solutions and a simple-to-apply relationship among the weighting matrices that allows elimination of the output offset.

Even though the theoretical number of contracting constraints should be infinite, the simulation results show that, when the controlled plant is open-loop stable, only a finite number of them is necessary to achieve the desired convergence. However, the precise finite number of constraints that guarantees convergence, and that would work as a sufficient condition, was not determined. This problem and the robustness of the proposed procedure in the presence of model uncertainty are considered matters for future work.


¹ The form of the pseudo cost reflects the fact that, in the infinite horizon case, if no new control increment is introduced at k + 1, the optimization problem remains exactly the same from time k to time k + 1, except for its starting point.
² Note that the initial model is written in terms of "u".
³ The formulation presented here is the same as that used in Maciejowski (2000).
⁴ Note that the constraint c1 in (11) is more conservative than the former.
⁵ Note that, as opposed to the infinite horizon case, the requirement is not u(k + i) = u_ss = G^{-1}r for i ≥ m, but Δu(k + i) = 0 for i ≥ m (control horizon hypothesis). In the infinite horizon case, that requirement is only stated to make the computation of the infinite cost function possible.
⁶ It is assumed that the on-line optimization problem is always feasible, and therefore the steady state reached with the proposed input increment exhibits a null cost.

REFERENCES
1. González, A.H., Odloak, D. and Marchetti, J.L., "An MPC Stability Condition using Step Response Model," AADECA, CD version (2004).
2. Maciejowski, J.M., Predictive Control with Constraints, Prentice Hall, England (2000).
3. Marchetti, J.L., Mellichamp, D.A. and Seborg, D.E., "Predictive Control Based on Discrete Convolution Models," Ind. Eng. Chem. Process Des. Dev., 22(3), 488-495 (1983).
4. Odloak, D., "Extended Robust Model Predictive Control," AIChE Journal, 50(8), 1824-1836 (2004).
5. Rawlings, J.B. and Muske, K.R., "The Stability of Constrained Receding Horizon Control," IEEE Transactions on Automatic Control, 38, 1512 (1993).
6. Rodrigues, M.A. and Odloak, D., "MPC for Stable Linear Systems with Model Uncertainty," Automatica, 39, 569-583 (2003).

Received: September 21, 2005.
Accepted for publication: February 6, 2006.
Recommended by Guest Editors C. De Angelo, J. Figueroa, G. García and J. Solsona.
