
## Latin American applied research

*print version* ISSN 0327-0793

### Lat. Am. appl. res. vol. 33, no. 3, Bahía Blanca, July/Sept. 2003

**Use of back-off computation in multilevel MPC**

**M. J. Arbiza, J. A. Bandoni and J. L. Figueroa ^{1}**

*Planta Piloto de Ingeniería Química (PLAPIQUI-UNS-CONICET), Camino la Carrindanga Km. 7, 8000, Bahía Blanca, ARGENTINA*

^{1} *Also with Dto. de Ing. Eléctrica y Computadoras, Univ. Nac. del Sur, Argentina. figueroa@uns.edu.ar*

*Abstract* - The desired operating point in Model Predictive Control is determined by a local steady-state optimization, which may be based on an economic objective. In this paper we propose the solution of a linear dynamic back-off problem to obtain a hierarchical scheme that ensures feasible operation in spite of disturbances. This is performed by computing the critical disturbances and expanding the optimization problem to ensure the existence of a control action that rejects each perturbation.

*Keywords* - Model Predictive Control. Process Optimization.

**I. INTRODUCTION**

Model Predictive Control (MPC) refers to a class of computer-implemented mathematical algorithms that control the future behavior of a plant through the use of an explicit process model. At each control interval the MPC algorithm computes in an open-loop mode a sequence of adjustments to the manipulated variables, in order to optimize the future plant behavior under process constraints. The first input in the optimal sequence is injected into the plant, and the entire optimization is repeated at subsequent control intervals. In modern processing plants the MPC controller is part of a multilevel hierarchy of control functions (Qin and Badgwell, 1997), as illustrated in Fig. 1. Several other authors (Richalet *et al.*, 1978; Prett and Garcia, 1988) have described similar hierarchical structures.

The second stage of this hierarchy (the unit optimizer) computes an optimal steady-state point and passes it to the dynamic constraint controller for implementation. This desired operating point is usually determined by a local steady-state optimization, which may be based on an economic objective and a linear model. Typically, the resulting point lies at the boundary of the operating region (i.e., at the intersection of several active constraints, as many as the number of optimization variables). The underlying idea is that the controller provides *perfect control*, so that the plant remains at, or at least close to, its nominal operating point in spite of disturbances, parameter variations and uncertainty in the plant characteristics. This is a clearly unrealistic scenario, given that in practice a plant cannot be operated at its nominal optimum. A possible way to overcome this practical limitation is to take a safety margin by tightening the constraints (i.e., by reducing the feasibility region), thus moving the desired operating point away from the actual plant constraints. In the absence of information about how disturbances affect the steady-state point, this overdesign is hard to justify on economic grounds.

In this paper we present an alternative procedure to compute an operating point that guarantees feasible operation in spite of process disturbances. The main idea is to move the operating point away from the boundary of the feasibility region by considering the effect that the expected disturbances will have on the plant operation. This movement is referred to in the literature as *back-off*. It was originally motivated by the desire to evaluate and compare control strategies and process designs on the basis of their economic impact (Bandoni *et al.*, 1994; Perkins and Walsh, 1994; Figueroa *et al.*, 1994).

In general terms, the back-off problem consists of the optimization of a steady-state objective function subject to dynamic constraints in the presence of process disturbances. Through this procedure, we ensure that the process operates at the optimal level of the defined performance objective function, with no constraint violations at the control level. In practice, the back-off problem is usually solved by finding an operating point that guarantees plant operation for the "worst case" of the disturbances, in the sense that they produce the largest constraint violation.

**Fig. 1**. Hierarchy of Control System.

A strategy to compute the nonlinear steady-state back-off was developed by Bandoni *et al.* (1994) by writing the optimization problem as one of semi-infinite programming. This algorithm was extended to the dynamic case by Figueroa *et al.* (1996). Due to the large computational effort required to solve the nonlinear optimization problem, alternative algorithms were proposed by Figueroa and Desages (1998) and Raspanti and Figueroa (2001), which approximate the model by piecewise linear models. Loeblein and Perkins (1999) proposed a methodology to evaluate the back-off under unconstrained MPC regulatory control for a stochastic description of disturbances; to perform this analysis, the disturbance must be assumed to be Gaussian noise with known statistics.

The paper is organized as follows. In Section 2 the model predictive control formulation is described. The optimization structure of Level 2 is presented in Section 3. An application example is developed in Section 4, and the paper ends with some conclusions in Section 5.

**II. MODEL PREDICTIVE CONTROL FORMULATION**

In this paper, we will assume that the underlying system is the following discrete linear system,

$$
\begin{aligned}
\mathbf{x}[k+1] &= \mathbf{A}\,\mathbf{x}[k] + \mathbf{B}_s\,\mathbf{u}_s + \mathbf{B}_c\,\mathbf{u}_c[k] + \mathbf{B}_d\,\mathbf{d}[k] \\
\mathbf{y}[k] &= \mathbf{C}\,\mathbf{x}[k]
\end{aligned}
\tag{1}
$$

where **x** is the state vector, **y** is the measured output vector, **u**_s is the vector of optimization variables that determines the operating condition (computed in Level 2), **u**_c is the vector of manipulated variables, and **d** is the vector representing the disturbances. The domains of the signals are assumed as follows,

$$
\mathbf{u}_s \in \mathcal{U}_s, \qquad
\mathbf{u}_c[k] \in \mathcal{U}_c, \qquad
\mathbf{d}[k] \in \mathcal{D} = \{\mathbf{d} : \|\mathbf{d}\|_{\infty} \le 1\}.
\tag{2}
$$

Using this model structure, the control problem to be solved is to compute a sequence of inputs {**u**_c[*k+l*], *l* = 1, ..., *M*} that will take the process from its current state **x**[*k*] to a desired steady state **x**^s. The MPC problem is written as,

$$
\min_{\Delta\mathbf{u}_c[k],\,\ldots,\,\Delta\mathbf{u}_c[k+M-1]}
\sum_{l=1}^{P} \big\|\mathbf{x}[k+l] + \mathbf{w}[k+l] - \mathbf{x}^s\big\|_{\mathbf{Q}}^{2}
+ \sum_{l=0}^{M-1} \big\|\Delta\mathbf{u}_c[k+l]\big\|_{\mathbf{R}}^{2}
\quad \text{s.t. Eq. (1)}, \quad \mathbf{z}_c[k+l] \le 0
\tag{3}
$$

where *P* is the output horizon, *M* is the control horizon, **z**_c is a set of *n*_c inequality constraints, **x**^s is the desired value for the state, Δ**u**_c[*k*] = (**u**_c[*k*] - **u**_c[*k*-1]) is the movement in the manipulated variable, and **w**[*k*] is a bias term that compares the current predicted state **x**[*k*] to the current measured state **x**^m[*k*] (i.e., **w**[*k+l*] = **x**^m[*k*] - **x**[*k*], for *l* = 1, 2, ..., *P*).

At each iteration, the measured process state is fed back when **w**[*k+l*] is computed to be used at the next sample time. In the solution of this problem it is usual to consider no disturbances along the control horizon (*i.e.*, **d**[*k+j*] = 0, *j* = 1, ..., *M*). It is possible to include in the vector **z**_c constraints that ensure closed-loop stability of the control law (de Olivera and Morari, 2000).
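The receding-horizon mechanism described above can be sketched numerically. The following toy example is entirely invented for illustration (a single-state plant, equal horizons, an input-weighting term in place of the full Δ**u** penalty, and no constraints, so the QP collapses to a linear least-squares solve); it computes a whole input sequence but, as in MPC, applies only the first move:

```python
import numpy as np

# Toy single-state system x[k+1] = a*x[k] + b*u[k]; all numbers invented.
a, b = 0.9, 0.5
P = M = 5                  # output and control horizons taken equal
x0, x_sp = 2.0, 0.0        # current state and desired steady state
lam = 0.1                  # weight on the input term

# Prediction over the horizon: x[k+l+1] = a^(l+1) x0 + sum_i a^(l-i) b u[k+i]
theta = np.array([a ** (l + 1) for l in range(P)])          # free response
gamma = np.zeros((P, M))                                    # forced response
for l in range(P):
    for i in range(l + 1):
        gamma[l, i] = a ** (l - i) * b

# Stack tracking and input-penalty terms into one least-squares problem:
#   min ||gamma @ u - (x_sp - theta*x0)||^2 + lam * ||u||^2
A_ls = np.vstack([gamma, np.sqrt(lam) * np.eye(M)])
b_ls = np.concatenate([x_sp - theta * x0, np.zeros(M)])
u_seq, *_ = np.linalg.lstsq(A_ls, b_ls, rcond=None)

# Receding horizon: only the first input of the sequence is applied
u_apply = u_seq[0]
x_next = a * x0 + b * u_apply
print(u_apply, x_next)
```

At the next sample time the whole optimization would be repeated from `x_next`, which is exactly the receding-horizon loop described in the text.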

**III. DYNAMIC OPTIMIZATION FOR LEVEL 2**

Usually, as mentioned above, the desired operating point is determined by a local steady-state optimization for the undisturbed system (*i.e.*, **d**[*k*] = 0) with no control action applied (*i.e.*, Δ**u**_c[*k*] = 0). This optimization may be based on an economic objective. Mathematically, this problem is written as,

$$
\max_{\mathbf{u}_s}\; J(\mathbf{x}, \mathbf{u}_s)
\quad \text{s.t.} \quad
\mathbf{x} = \mathbf{A}\,\mathbf{x} + \mathbf{B}_s\,\mathbf{u}_s, \qquad
\mathbf{z}_c(\mathbf{x}, \mathbf{u}_s) \le 0.
\tag{4}
$$

Typically, the resulting point lies on the boundary of the operating region (i.e., at the intersection of as many active constraints as the dimension of the optimization variables). The underlying idea is that the controller provides *perfect control*, so that the plant remains at its nominal operating point in spite of disturbances.

In this paper we suggest an alternative way to compute the operating point. The main idea is to move the operating point away from the boundary of the feasibility region by considering the effect that the expected disturbances will have on the operation of the plant. This is called *back-off*, and it is motivated by the desire to evaluate and compare control strategies and process designs on the basis of their economic impact (Figueroa *et al.*, 1994).

In general, the back-off problem is defined as the optimization of a steady-state objective function subject to dynamic constraints when disturbances are present. In the context of this paper, this is mathematically written as,

$$
\max_{\mathbf{u}_s}\; J(\mathbf{x}[0], \mathbf{u}_s)
\quad \text{s.t.} \quad
\begin{aligned}
&\mathbf{x}[k+1] = \mathbf{A}\,\mathbf{x}[k] + \mathbf{B}_s\,\mathbf{u}_s + \mathbf{B}_c\,\mathbf{u}_c[k] + \mathbf{B}_d\,\mathbf{d}[k], \\
&\mathbf{u}_c[k] = cont(\mathbf{x}[k]), \qquad
\mathbf{z}_c[k] \le 0 \quad \forall\, \mathbf{d} \in \mathcal{D}
\end{aligned}
\tag{5}
$$

where **x**[0] denotes the vector **x** at the steady state (for the undisturbed system and without control action) and *cont*(**x**[*k*]) is an expression for a general controller. Note that in this problem the control algorithm is usually assumed to be implicit in the dynamic model. The objective function has an economic meaning and is computed at steady state. In our case it is quadratic, because in this way it is possible to represent the economic cost of process operation with low mathematical complexity. The set of possible disturbances is constrained to be of bounded amplitude. Finally, the initial conditions for the disturbances and the control action are taken equal to zero (*i.e.*, **d**[*k*] = 0 and Δ**u**_c[*k*] = 0).

The objective function is evaluated at the initial time, considering that the plant is at steady state and free of disturbances (**x**[*k*+1] = **x**[*k*], **d**[*k*] = 0 and **u**_c[*k*] = 0). Let us assume that (**I**-**A**)^{-1} exists (a condition that holds for non-integrating processes). Under these assumptions the steady-state vector is **x** = (**I**-**A**)^{-1}**B**_s**u**_s. This implies that the objective function can be written as,

$$
J(\mathbf{u}_s) = \mathbf{u}_s^{T}\,\tilde{\mathbf{Q}}\,\mathbf{u}_s + \tilde{\mathbf{q}}^{T}\mathbf{u}_s
\tag{6}
$$

where Q̃ and q̃ collect the quadratic and linear coefficients of the original objective after substituting **x** = (**I**-**A**)^{-1}**B**_s**u**_s.
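The steady-state relation **x** = (**I**-**A**)^{-1}**B**_s**u**_s can be verified numerically. The sketch below uses made-up stable matrices (not taken from the paper's example) and checks the fixed-point property:

```python
import numpy as np

# Illustrative stable system (spectral radius < 1, so I - A is invertible);
# the matrices are invented for this sketch.
A = np.array([[0.8, 0.1],
              [0.0, 0.7]])
Bs = np.array([[0.5],
               [1.0]])
us = np.array([2.0])

# Steady state of x[k+1] = A x[k] + Bs us  =>  x = (I - A)^{-1} Bs us
x_ss = np.linalg.solve(np.eye(2) - A, Bs @ us)

# Fixed-point check: propagating the model from x_ss leaves it unchanged
assert np.allclose(A @ x_ss + Bs @ us, x_ss)
print(x_ss)
```

For an integrating process (an eigenvalue of **A** at 1), `np.eye(2) - A` would be singular and the solve would fail, which is exactly the restriction noted in the text.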

Now, assuming the use of the MPC control structure defined in the previous section, let us analyze the dynamic constraints. Starting from the value of **x**[*k*], it is possible to solve the dynamic model recursively over a horizon of *P* future samples as,

$$
X[k] = \Theta\,\mathbf{x}[k] + \Gamma_s\,\mathbf{u}_s + \Gamma_c\,U_c[k] + \Gamma_d\,D[k]
\tag{7}
$$

with

$$
X[k] = \begin{bmatrix} \mathbf{x}[k+1] \\ \vdots \\ \mathbf{x}[k+P] \end{bmatrix}, \qquad
U_c[k] = \begin{bmatrix} \mathbf{u}_c[k] \\ \vdots \\ \mathbf{u}_c[k+M-1] \end{bmatrix}, \qquad
D[k] = \begin{bmatrix} \mathbf{d}[k] \\ \vdots \\ \mathbf{d}[k+P-1] \end{bmatrix},
$$

$$
\Theta = \begin{bmatrix} \mathbf{A} \\ \mathbf{A}^2 \\ \vdots \\ \mathbf{A}^P \end{bmatrix}, \qquad
\Gamma_s = \begin{bmatrix} \mathbf{B}_s \\ (\mathbf{A}+\mathbf{I})\mathbf{B}_s \\ \vdots \\ \sum_{i=0}^{P-1}\mathbf{A}^{i}\mathbf{B}_s \end{bmatrix}, \qquad
\Gamma_c = \begin{bmatrix} \mathbf{B}_c & 0 & \cdots \\ \mathbf{A}\mathbf{B}_c & \mathbf{B}_c & \cdots \\ \vdots & & \ddots \end{bmatrix}, \qquad
\Gamma_d = \begin{bmatrix} \mathbf{B}_d & 0 & \cdots \\ \mathbf{A}\mathbf{B}_d & \mathbf{B}_d & \cdots \\ \vdots & & \ddots \end{bmatrix}.
$$
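A minimal sketch of how the stacked matrices Θ and Γ_c of Eq. (7) can be assembled, cross-checked against a recursive simulation (illustrative dimensions, with the control horizon taken equal to *P*):

```python
import numpy as np

# Build the stacked prediction matrices Theta and Gamma of Eq. (7) for a
# generic (A, B) pair; dimensions here are illustrative.
def prediction_matrices(A, B, P):
    n, m = B.shape
    theta = np.vstack([np.linalg.matrix_power(A, l + 1) for l in range(P)])
    gamma = np.zeros((P * n, P * m))
    for l in range(P):            # block row: prediction of x[k+l+1]
        for i in range(l + 1):    # input u[k+i] enters through A^(l-i) B
            gamma[l*n:(l+1)*n, i*m:(i+1)*m] = np.linalg.matrix_power(A, l - i) @ B
    return theta, gamma

A = np.array([[0.9, 0.2], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
P = 4
theta, gamma = prediction_matrices(A, B, P)

# Cross-check the stacked form X = Theta x0 + Gamma U against a recursive
# simulation of x[k+1] = A x[k] + B u[k]
rng = np.random.default_rng(0)
x0 = rng.standard_normal(2)
U = rng.standard_normal(P)
x, traj = x0, []
for l in range(P):
    x = A @ x + B @ U[l:l+1]
    traj.append(x)
X_stacked = theta @ x0 + gamma @ U
assert np.allclose(X_stacked, np.concatenate(traj))
```

Γ_s and Γ_d have the same lower block-triangular structure with **B**_s and **B**_d in place of **B**_c (for Γ_s the columns collapse into a single block column, since **u**_s is constant over the horizon).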

Using this notation for the constraints, we obtain,

$$
\zeta[k] = \Psi_c\,\Theta\,\mathbf{x}[k]
+ (\Psi_c\,\Gamma_s + \Omega_{cs})\,\mathbf{u}_s
+ (\Psi_c\,\Gamma_c + \Omega_{cc})\,U_c[k]
+ (\Psi_c\,\Gamma_d + \Omega_{cd})\,D[k]
+ \xi_c \le 0
$$

where Ψ_c = *diag*{**C**_c, ..., **C**_c}, Ω_cs = [**D**_cs^T, ..., **D**_cs^T]^T, Ω_cc = *diag*{**D**_cc, ..., **D**_cc}, Ω_cd = *diag*{**D**_cd, ..., **D**_cd} and ξ_c = [**E**_c^T, ..., **E**_c^T]^T. Then,

$$
\zeta[k] = \Xi\,\mathbf{x}[k] + \Xi_s\,\mathbf{u}_s + \Xi_c\,U_c[k] + \Xi_d\,D[k] + \xi_c \le 0
\tag{8}
$$

where Ξ = Ψ_cΘ, Ξ_s = (Ψ_cΓ_s + Ω_cs), Ξ_c = (Ψ_cΓ_c + Ω_cc) and Ξ_d = (Ψ_cΓ_d + Ω_cd).

_{cd} In the solution of the back-off problem it is a usual practice to define the control algorithm and compute "the worst" disturbance in the sense of producing the largest violation of the constraints. In our case, since that we use a MPC scheme, we should consider "the worst" movement from the steady state due to distur-bance effect when "the best" control is applied. Then, we are interested in solving , where an optimization should be solved for each row (*j*) of the matrix [z[0]]* _{j}*. The domain of maximization corre-sponding to the disturbances moves between -1 and +1. Now, if we consider that this is a set of linear problems (one for each row of z[0]) and that the optimization variables (

**,**

*D***U**) are not related, this is equivalent to solve for each row .

_{c}The solution of the first term could be found independ-ently computing for each row, considering that the large value of each row will be obtain for the values of

**[0] that produces the largest contribution on [X**

*D**[0]]*

_{d}**D***. This coincides with the values of*

_{j}**[0]=± 1 correspond-ing with the sign of X**

*D**, i.e.,*

_{d}(9) |

Obviously, this vector defines "the worst case" at each instant and in each constraint. Now, in the present problem of compute back-off under MPC structure, we are interested in obtaining a value of **u _{s}** and the corre-sponding values of the control action

**U**[

_{c}*k*] in order to obtain the maximum of the steady-state objective func-tion without constraint violations. This is equivalent to solve

(10) |
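The sign rule for the worst-case disturbance, Eq. (9), can be checked by brute force over the vertices of the unit box on a small stand-in for Ξ_d (the matrix below is invented for illustration):

```python
import numpy as np
from itertools import product

# Worst-case disturbance per constraint row: over the box ||d||_inf <= 1 the
# linear term [Xi_d d]_j is maximized by d = sign of the j-th row of Xi_d.
# Xi_d is an arbitrary stand-in for the stacked disturbance matrix.
Xi_d = np.array([[ 0.5, -1.0,  0.2],
                 [-0.3,  0.0,  0.7]])

d_worst = np.sign(Xi_d)                  # one worst-case vector per row
worst_value = np.abs(Xi_d).sum(axis=1)   # resulting maximal contribution

# Brute-force check over all vertices of the box [-1, 1]^3: the maximum of a
# linear function over a box is attained at a vertex
for j in range(Xi_d.shape[0]):
    best = max(Xi_d[j] @ np.array(v) for v in product([-1.0, 1.0], repeat=3))
    assert np.isclose(best, worst_value[j])
```

The per-row maximum equals the sum of absolute values of the row, which is what makes it cheap to enumerate the critical disturbances, one per stacked constraint row.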

In this problem, the constraints should be satisfied for all disturbances. This implies that at the operating point there should exist "a control" that rejects "each disturbance". We can write this in mathematical terms as,

$$
\Xi\,(\mathbf{I}-\mathbf{A})^{-1}\mathbf{B}_s\,\mathbf{u}_s + \Xi_s\,\mathbf{u}_s + \Xi_c\,U_c[0] + \Xi_d\,D[0] + \xi_c \le 0
\tag{11}
$$

where **x**[0] has been replaced by its steady-state value. In the constraints in expression (11), the presence of the "worst disturbances", in the sense of producing the largest violation of the constraints at any time, is implicit. Now, in operation, each of these disturbances will require a particular control action to reject it. Let us define the row of ζ[0] associated with the *j*-th constraint as [ζ[0]]_j, with *j* = 1, ..., *Pn*_c (this amounts to adding a row for each sample time and for each constraint). Then, in the following we propose to consider a particular control action (**U**_c^j[0]) to compensate each row. That is, we can write the control-existence problem as,

$$
\Big[\Xi\,(\mathbf{I}-\mathbf{A})^{-1}\mathbf{B}_s\,\mathbf{u}_s + \Xi_s\,\mathbf{u}_s + \Xi_c\,U_c^{\,j}[0] + \Xi_d\,D^{\,j}[0] + \xi_c\Big]_j \le 0
$$

for *j* = 1, ..., *Pn*_c; or, in matrix form, as

$$
A_{op}\,u_{op} + b_{op} \le 0
\tag{12}
$$

where *u*_op stacks **u**_s together with the control sequences **U**_c^1[0], ..., **U**_c^{Pn_c}[0], and *A*_op and *b*_op collect the corresponding constraint coefficients. The first set of inequalities is included to force the process to verify the steady-state equations. This problem can be solved as a standard quadratic program,

$$
\max_{u_{op}}\; u_{op}^{T}\,\tilde{\mathbf{Q}}_{op}\,u_{op} + \tilde{\mathbf{q}}_{op}^{T}\,u_{op}
\quad \text{s.t.} \quad A_{op}\,u_{op} + b_{op} \le 0
\tag{13}
$$

where we obtain a particular control, **U**_c^j[0], for each "worst case" disturbance. In the next section we will use this optimization formulation within the MPC framework for an illustrative example.
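Problem (13) is a standard quadratic program. A minimal sketch, using `scipy.optimize.minimize` with the SLSQP method on invented data (a concave quadratic objective and three linear inequalities, not the paper's flowsheet example):

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for problem (13): maximize a concave quadratic economic
# objective subject to linear inequalities A_op u + b_op <= 0.
# All numbers are illustrative.
Q = np.diag([1.0, 2.0])                 # positive definite => -u'Qu concave
q = np.array([2.0, 1.0])
A_op = np.array([[ 1.0,  1.0],          # u1 + u2 <= 1.5
                 [-1.0,  0.0],          # u1 >= 0
                 [ 0.0, -1.0]])         # u2 >= 0
b_op = np.array([-1.5, 0.0, 0.0])

obj = lambda u: -(-(u @ Q @ u) + q @ u)                       # negate to minimize
cons = {"type": "ineq", "fun": lambda u: -(A_op @ u + b_op)}  # scipy wants >= 0
res = minimize(obj, x0=np.zeros(2), constraints=cons, method="SLSQP")

u_opt = res.x
assert res.success
assert np.all(A_op @ u_opt + b_op <= 1e-8)   # feasibility of the solution
print(u_opt)
```

In the real back-off problem the decision vector additionally stacks one control sequence per critical disturbance, but the structure (quadratic objective, linear inequalities) is unchanged, so any QP solver applies.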

**IV. EXAMPLE**

The case study considered in this section consists of two continuous stirred tank reactors (CSTR) in series, with an intermediate mixer introducing a second feed (de Hennin and Perkins, 1993; de Hennin *et al.*, 1994; Figueroa, 2000), as shown in Fig. 2. A single irreversible, exothermic, first-order reaction *A* → *B* takes place in both reactors. The dynamic model of these reactors consists of the mass and energy balances of each unit, together with algebraic relations defining the reaction rates and the heat exchange with the cooling water.

**Fig. 2.** Flowsheet Example

Table 1. Parameters and Variables

The process parameters and variables are defined in Table 1, which also presents the bounds on some variables. The state variables are the concentrations and temperatures in both reactors (**x** = [*C*^1 *T*^1 *C*^2 *T*^2]^T), the optimization variables are the first and second feed flowrates and the cooling temperatures for both reactors, the manipulated variables are the cooling temperatures for both reactors, and the disturbances are the composition and the temperature of both feeds. The objective function for the Level 2 optimization is to maximize the operating profit.

There are the following constraints in this process:

*Safety constraints:*

*T*^1 ≤ 350, *T*^2 ≤ 350;

*Production limitations:*

*Q*_F^1 + *Q*_M ≤ 0.8, *Q*_F^1 ≥ 0.05, *Q*_M ≥ 0.05;

*Process limitations:*

200 ≤ *T*^1_ci ≤ 300, 200 ≤ *T*^2_ci ≤ 300, 200 ≤ *T*^1_co ≤ 310, 200 ≤ *T*^2_co ≤ 310, *F*^1_cw ≤ 2, *F*^2_cw ≤ 2;

*Product specification:* *C*^2 ≤ 0.3.

The initial values for the optimization and output variables are the ones that come from the global optimization of Level 1:

**u**_s = [0.2062 0.3352 250 250]^T

**x** = [0.1455 350 0.2105 332.1]^T.

It is important to remark that operation at these values is nominally (*i.e.*, with no disturbances) feasible. When perturbations are present, this operating point becomes infeasible due to the violation of some constraints (*T*^1_co > 310 and *T*^2_co > 310).

Using the linearized model, the solution of problem (13) modifies this operating point to make it a permanently feasible optimum for the set of possible disturbances,

**u**_s = [0.53 0.27 252.13 294.27]^T

**x** = [0.356 342.32 0.197 336.83]^T.

Figures 3-6 show the dynamic response of the process under the MPC control algorithm when step disturbances in both feed temperatures are applied. The plots show the behavior of the temperature in the first and second reactors (Figs. 3 and 4, respectively) and of the inlet temperature of the cooling flow in both reactors (Figs. 5 and 6). In no case do the process variables exhibit constraint violations, so we can say that the process has been successfully optimized and controlled.

**Fig. 3**. Temperature in first reactor

**Fig. 4**. Temperature in second reactor

**Fig. 5**. Cooler temperature in first reactor

**Fig. 6**. Cooler temperature in second reactor

**V. CONCLUSIONS**

In this paper the problem of determining the optimal operating point of a process under MPC control was presented. In particular, the presence of disturbances was considered in order to ensure no constraint violations under perturbations. To achieve this, the local optimization approach for Level 2 in MPC is replaced by a back-off algorithm. This algorithm was modified to allow the incorporation of an MPC scheme by including in the optimization problem as many control sequences as the number of critical perturbations. The resulting scheme was applied to a flowsheet example.

**REFERENCES**

1. Bandoni, J.A., J.A. Romagnoli and G.W. Barton, "On Optimising Control and the Effect of Disturbances: Calculations of the Open-Loop Back-offs", *Computers and Chem. Eng.* **18/S**, 505-509 (1994).

2. de Olivera, S.L. and M. Morari, "Contractive Model Predictive Control for Constrained Nonlinear Systems", *IEEE Trans. on Automatic Control* **45**, 1053-1070 (2000).

3. de Hennin, S.R. and J.D. Perkins, "Structural decisions in on-line optimization", Technical Report B93-37, Imperial College, London (1993).

4. de Hennin, S., J.D. Perkins and G.W. Barton, "Structural decisions in on-line optimization", *Proc. PSE '94*, Korea, 297-302 (1994).

5. Figueroa, J.L., P.A. Bahri and J.A. Romagnoli, "Economic Impact of Disturbances in Chemical Processes - A Dynamic Analysis", *Interactions Between Design and Process Control*, Pergamon Press, Baltimore, 141-146 (1994).

6. Figueroa, J.L., P.A. Bahri, J.A. Bandoni and J.A. Romagnoli, "Economic Impact of Disturbances and Uncertain Parameters in Chemical Processes - A Dynamic Analysis", *Computers and Chem. Eng.* **20**, 453-461 (1996).

7. Figueroa, J.L. and A.C. Desages, "Use of Piecewise Linear Approximations for Steady-State Back-off Analysis", *Optimal Control: Applications and Methods* **19**, 93-110 (1998).

8. Figueroa, J.L., "Economic Performance of Variable Structure Control: A Case Study", *Computers and Chem. Eng.* **24**, 1821-1827 (2000).

9. Loeblein, C. and J.D. Perkins, "Structural Design for On-Line Process Optimization: I. Dynamic Economics of MPC", *AIChE Journal* **45**, 1018-1029 (1999).

10. Perkins, J.D. and S.P.K. Walsh, "Optimization as a Tool for Design/Control Integration", *Interactions Between Design and Process Control*, Pergamon Press, Baltimore, 1-10 (1994).

11. Prett, D. and C. Garcia, *Fundamental Process Control*, Butterworth-Heinemann Series in Chemical Engineering (1988).

12. Qin, S.J. and T.A. Badgwell, "An Overview of Industrial Model Predictive Control Technology", *Fifth International Conference on Chemical Process Control*, AIChE Symposium Series, Eds. Kantor, Garcia and Carnahan, **93**, 232-256 (1997).

13. Raspanti, C.G. and J.L. Figueroa, "Use of a CPWL Approach for Operativity Analysis: A Case Study", *Latin American Applied Research* **31**, 73-78 (2001).

14. Richalet, J., A. Rault, J.L. Testud and J. Papon, "Model predictive heuristic control: Applications to industrial processes", *Automatica* **14**, 413-428 (1978).