Latin American applied research

Print version ISSN 0327-0793

Lat. Am. appl. res. vol.33 no.3 Bahía Blanca July/Sept. 2003

 

Use of back-off computation in multilevel MPC

M. J. Arbiza, J. A. Bandoni and J. L. Figueroa1

Planta Piloto de Ingeniería Química (PLAPIQUI-UNS-CONICET), Camino la Carrindanga Km. 7, 8000, Bahía Blanca, ARGENTINA
1 Also with Dto. de Ing. Eléctrica y Computadoras, Univ. Nac. del Sur, Argentina
figueroa@uns.edu.ar

Abstract - The desired operating point in Model Predictive Control is determined by a local steady-state optimization, which may be based on an economic objective. In this paper we propose the solution of a linear dynamic back-off problem to obtain a hierarchical scheme that ensures feasible operation in spite of disturbances. This is performed by computing the critical disturbances and expanding the optimization problem to ensure the existence of a control action that rejects each perturbation.

Keywords - Model Predictive Control. Process Optimization.

I. INTRODUCTION

Model Predictive Control (MPC) refers to a class of computer-implemented mathematical algorithms that control the future behavior of a plant through the use of an explicit process model. At each control interval the MPC algorithm computes, in an open-loop mode, a sequence of adjustments of the manipulated variables in order to optimize the future plant behavior under process constraints. The first input in the optimal sequence is injected into the plant, and the entire optimization is repeated at subsequent control intervals. In modern processing plants the MPC controller is part of a multilevel hierarchy of control functions (Qin and Badgwell, 1997), as illustrated in Fig. 1. Several other authors (Richalet et al., 1978; Prett and Garcia, 1988) have described similar hierarchical structures.
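The receding-horizon mechanism just described can be summarized in a short sketch. This is only a schematic loop under assumed interfaces: solve_mpc(x) and plant_step(x, u) are hypothetical callbacks introduced here for illustration and are not defined in the paper.

import numpy as np

def receding_horizon_loop(x0, n_steps, solve_mpc, plant_step):
    """Schematic receding-horizon loop: at every interval an open-loop input
    sequence is computed, only its first element is injected into the plant,
    and the whole optimization is repeated at the next interval.
    solve_mpc(x) and plant_step(x, u) are hypothetical callbacks."""
    x = np.asarray(x0, dtype=float)
    applied = []
    for _ in range(n_steps):
        u_seq = solve_mpc(x)          # open-loop optimal adjustments over the horizon
        u0 = np.asarray(u_seq)[0]     # first input of the optimal sequence
        x = plant_step(x, u0)         # plant (or simulator) advances one control interval
        applied.append(u0)
    return np.array(applied), x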
The second stage of this hierarchy (the unit optimizer) computes an optimal steady-state point and passes it to the dynamic constraint control for its implementation. This desired operating point is usually determined by a local steady-state optimization, which may be based on an economic objective and a linear model. Typically, the resulting point lies at the boundary of the operating region (i.e., it is at the intersection of several active constraints, as many as the number of optimization variables). The underlying idea is that the controller provides perfect control, so that the plant remains at, or at least close to, its nominal operating point in spite of disturbances, parameter variations and uncertainty in the plant characteristics. This is a clearly unrealistic scenario, given that in practice a plant cannot be operated at its nominal optimum. A possible way to overcome this limitation is to take a safety margin by tightening the constraints (i.e., by reducing the feasibility region), thereby moving the desired operating point away from the actual plant constraints. In the absence of information about how disturbances affect the steady-state point, this overdesign is hard to justify on economic grounds.
In this paper we present an alternative procedure to compute an operating point that guarantees feasible operation in spite of process disturbances. The main idea is to move the operating point away from the boundary of the feasibility region by considering the effect that the expected disturbances will have on the plant operation. This movement is referred to in the literature as back-off. It was originally motivated by the desire to evaluate and compare control strategies and process designs on the basis of their economic impact (Bandoni et al., 1994; Perkins and Walsh, 1994; Figueroa et al., 1994).
In general terms, the back-off problem consists of the optimization of a steady-state objective function subject to dynamic constraints in the presence of process disturbances. Through this procedure, we ensure that the process operates at the optimal value of the defined performance objective function, with no constraint violations at the control level. In practice, the back-off problem is usually solved by finding an operating point that guarantees plant operation for the "worst case" of the disturbances, in the sense that they produce the largest constraint violation.


Fig. 1. Hierarchy of Control System.

A strategy to compute the nonlinear steady-state back-off was developed by Bandoni et al. (1994) by writing the optimization problem as a semi-infinite programming problem. This algorithm was extended to the dynamic case by Figueroa et al. (1996). Due to the large computational effort needed to solve the nonlinear optimization problem, algorithms based on piecewise linear approximations of the model were proposed by Figueroa and Desages (1998) and Raspanti and Figueroa (2001). Loeblein and Perkins (1999) proposed a methodology to evaluate the back-off under unconstrained MPC regulatory control for a stochastic description of the disturbances; to perform this analysis, the disturbances must be assumed to be Gaussian noise with known statistics.
The paper is organized as follows. In Section 2 the model predictive control formulation is described. The optimization structure of Level 2 is presented in Section 3. An application example is developed in Section 4, and the paper ends with some conclusions in Section 5.

II. MODEL PREDICTIVE CONTROL FORMULATION

In this paper, we will assume that the underlying system is the following discrete linear system,

(1)

where x is the state vector, y is the measured output vector, us is the vector of optimization variables that determines the operating condition (computed in Level 2), uc is the vector of manipulated variables and d is the vector representing the disturbances. The domains of the signals are assumed as follows,

(2)

Using this model structure, the control problem to be solved is to compute a sequence of inputs {uc[k+l], l = 1, …, M} that will take the process from its current state x[k] to a desired steady state xs. The MPC problem is written as,

(3)

where P is the output horizon, M is the control horizon, zc is a set of nc inequality constraints, xs is the desired value for the state, Δuc[k] = uc[k] - uc[k-1] is the movement of the manipulated variable and w[k] is a bias term that compares the current predicted state x[k] to the current measured state xm[k] (i.e., w[k + l] = xm[k] - x[k], for l = 1, 2, …, P).

At each iteration, the measurement of the actual process state is fed back through w[k+l], which is computed to be used at the next sample time. In the solution of this problem it is usual to consider no disturbances along the control horizon (i.e., d[k+j] = 0, j = 1, …, M). It is possible to include in the vector zc constraints that ensure closed-loop stability of the control law (de Olivera and Morari, 2000).
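To make problem (3) concrete, the sketch below sets up one open-loop MPC subproblem for a linear model with the roles described above, assuming the state update x[k+1] = A x[k] + Bs us + Bc uc[k] for the undisturbed prediction. The quadratic weights Qw and Rw, the simple input bounds standing in for zc ≤ 0, and the use of cvxpy as a generic QP solver are choices made here for illustration, not taken from the paper.

import numpy as np
import cvxpy as cp

def mpc_step(A, Bs, Bc, x_k, x_s, u_s, uc_prev, w_k, P, M, Qw, Rw, u_min, u_max):
    """One open-loop solve in the spirit of problem (3): drive the bias-corrected
    prediction to the target x_s while penalizing input moves."""
    n, m = A.shape[0], Bc.shape[1]
    x = cp.Variable((P + 1, n))
    uc = cp.Variable((M, m))

    cost = 0
    cons = [x[0] == x_k]
    for l in range(P):
        u_l = uc[min(l, M - 1)]                                   # input held beyond the control horizon
        cons.append(x[l + 1] == A @ x[l] + Bs @ u_s + Bc @ u_l)   # model prediction with d = 0
        cost += cp.quad_form(x[l + 1] + w_k - x_s, Qw)            # bias-corrected tracking term
    for l in range(M):
        du = uc[l] - (uc[l - 1] if l > 0 else uc_prev)            # Delta uc[k+l]
        cost += cp.quad_form(du, Rw)
        cons += [u_min <= uc[l], uc[l] <= u_max]                  # simple stand-in for z_c <= 0

    cp.Problem(cp.Minimize(cost), cons).solve()
    return uc.value[0]                                            # only the first move is applied

In a receding-horizon implementation this routine would be called once per control interval, as in the loop sketched in the Introduction.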

III. DYNAMIC OPTIMIZATION FOR LEVEL 2

Usually, as mentioned above, the desired operating point is determined by a local steady-state optimization for the undisturbed system (i.e., d[k] = 0) with no control action applied (i.e., Δuc[k] = 0). This optimization may be based on an economic objective. Mathematically, this problem is written as,

(4)

Typically, the resulting point lies on the boundary of the operating region (i.e., it lies at the intersection of as many active constraints as the dimension of the optimization variables). The underlying idea is that the controller provides perfect control, so that the plant remains at its nominal operating point in spite of disturbances.
In this paper we suggest an alternative way to compute the operating point. The main idea is to move the operating point away from the boundary of the feasibility region by considering the effect that the expected disturbances will have on the operation of the plant. This is called back-off, and it is motivated by the desire to evaluate and compare control strategies and process designs on the basis of their economic impact (Figueroa et al., 1994).
In general, the back-off problem is defined as the optimization of a steady-state objective function subject to dynamic constraints when disturbances are present. In the context of this paper, this is mathematically written as,

(5)

where x[0] denotes the vector x at the steady state (for the undisturbed system and without control action) and cont(x[k]) is an expression for a general controller. Note that, as is usual in this kind of problem, the control algorithm is assumed to be implicit in the dynamic model. The objective function has an economic meaning and is computed at steady state. In our case it is quadratic, because in this way it is possible to represent the economic cost of process operation with low mathematical complexity. The set of possible disturbances is constrained to be of bounded amplitude. Finally, the initial conditions for the disturbances and the control action are considered equal to zero (i.e., d[k] = 0 and Δuc[k] = 0).
The objective function is evaluated at the initial time, considering that the plant is at steady state and free of disturbances (x[k + 1] = x[k], d[k] = 0 and uc[k] = 0). Let us assume that (I - A)^-1 exists (a condition that holds for non-integrating processes). Under these assumptions the steady-state vector is x = (I - A)^-1 Bs us. This implies that the objective function can be written as,

(6)

where the matrices in (6) are obtained by substituting the steady-state relation x = (I - A)^-1 Bs us into the original objective function.
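Since the matrices of Eqn. (6) are not reproduced above, the following minimal sketch only illustrates the substitution just described: an assumed quadratic economic objective in (x, us) is condensed into a quadratic in us alone through x = (I - A)^-1 Bs us. The weights Qe, Se and the linear terms qe, se are placeholders introduced here, not quantities from the paper.

import numpy as np

def condense_steady_state_objective(A, Bs, Qe, Se, qe, se):
    """Assumed economic objective  Phi(x, us) = x'Qe x + us'Se us + qe'x + se'us.
    Substituting the steady state x = (I - A)^-1 Bs us (no control action and no
    disturbance) gives a quadratic in us alone, as in Eqn. (6)."""
    n = A.shape[0]
    Mss = np.linalg.solve(np.eye(n) - A, Bs)   # (I - A)^-1 Bs
    H = Mss.T @ Qe @ Mss + Se                  # quadratic weight on us
    h = Mss.T @ qe + se                        # linear weight on us
    return H, h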

Now, assuming the use of the MPC control structure defined in the previous section, let us analyze the dynamic constraints. Starting from the value of x[k], it is possible to solve the dynamic model recursively over a horizon of P future samples as,

X[k] = Q x[k] + Gs us + Gc Uc[k] + Gd D[k]     (7)

with Q, Gs, Gc and Gd the prediction matrices obtained from the recursion of model (1), and X[k], Uc[k] and D[k] the stacked vectors of future states, control moves and disturbances over the horizon.

Using this notation for the constraints, we obtain,

z[k] = Yc Q x[k] + (Yc Gs + Wcs) us + (Yc Gc + Wcc) Uc[k] + (Yc Gd + Wcd) D[k] + ξc ≤ 0

where Yc = diag{Cc, ..., Cc}, Wcs = [Dcs^T, ..., Dcs^T]^T, Wcc = diag{Dcc, ..., Dcc}, Wcd = diag{Dcd, ..., Dcd} and ξc = [Ec^T, ..., Ec^T]^T. Then,

z[k] = X x[k] + Xs us + Xc Uc[k] + Xd D[k] + ξc ≤ 0     (8)

where X = Yc Q, Xs = (Yc Gs + Wcs), Xc = (Yc Gc + Wcc) and Xd = (Yc Gd + Wcd).
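The definitions of Q, Gs, Gc, Gd and of the stacked constraint matrices were not reproduced above, so the sketch below shows one way they could be assembled. It assumes the state recursion x[k+1] = A x[k] + Bs us + Bc uc[k] + Bd d[k] behind Eqn. (1), a per-sample constraint of the form Cc x + Dcs us + Dcc uc + Dcd d + Ec ≤ 0 suggested by the definitions of Yc, Wcs, Wcc, Wcd and ξc, and P stacked inputs and disturbances; treat it as an interpretation rather than the paper's exact matrices.

import numpy as np

def prediction_matrices(A, Bs, Bc, Bd, P):
    """Stack x[k+1..k+P] as X[k] = Q x[k] + Gs us + Gc Uc[k] + Gd D[k] (Eqn. 7),
    assuming x[k+1] = A x[k] + Bs us + Bc uc[k] + Bd d[k]."""
    n, mc, md = A.shape[0], Bc.shape[1], Bd.shape[1]
    Q  = np.vstack([np.linalg.matrix_power(A, l) for l in range(1, P + 1)])
    Gs = np.vstack([sum(np.linalg.matrix_power(A, i) for i in range(l)) @ Bs
                    for l in range(1, P + 1)])
    Gc = np.zeros((P * n, P * mc))
    Gd = np.zeros((P * n, P * md))
    for l in range(1, P + 1):                      # block row l predicts x[k+l]
        for j in range(l):                         # contribution of uc[k+j] and d[k+j]
            Apow = np.linalg.matrix_power(A, l - 1 - j)
            Gc[(l - 1) * n:l * n, j * mc:(j + 1) * mc] = Apow @ Bc
            Gd[(l - 1) * n:l * n, j * md:(j + 1) * md] = Apow @ Bd
    return Q, Gs, Gc, Gd

def stacked_constraints(Q, Gs, Gc, Gd, Cc, Dcs, Dcc, Dcd, Ec, P):
    """Build X, Xs, Xc, Xd and xi_c of Eqn. (8) from the assumed per-sample
    constraint Cc x + Dcs us + Dcc uc + Dcd d + Ec <= 0."""
    Yc  = np.kron(np.eye(P), Cc)                   # diag{Cc, ..., Cc}
    Wcs = np.vstack([Dcs] * P)                     # [Dcs; ...; Dcs]
    Wcc = np.kron(np.eye(P), Dcc)
    Wcd = np.kron(np.eye(P), Dcd)
    xic = np.concatenate([Ec] * P)                 # [Ec; ...; Ec]
    return Yc @ Q, Yc @ Gs + Wcs, Yc @ Gc + Wcc, Yc @ Gd + Wcd, xic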

In the solution of the back-off problem it is usual practice to define the control algorithm and to compute "the worst" disturbance, in the sense of producing the largest violation of the constraints. In our case, since we use an MPC scheme, we should consider "the worst" movement from the steady state due to the disturbance effect when "the best" control is applied. Then, for each row (j) of z[0] we are interested in solving a max-min problem: the maximization over the disturbances (which range between -1 and +1) of the minimum over the control moves of [z[0]]j. Now, if we consider that this is a set of linear problems (one for each row of z[0]) and that the optimization variables (D, Uc) are not coupled, this is equivalent to solving, for each row, the maximization of the disturbance term and the minimization of the control term separately.
The solution of the first term can be found independently for each row, considering that the largest value of each row is obtained for the values of D[0] that produce the largest contribution to [Xd D[0]]j. These coincide with the values D[0] = ±1 matching the sign of the corresponding row of Xd, i.e.,

(9)
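A minimal sketch of this worst-case selection, assuming the stacked matrix Xd of Eqn. (8) is available as a NumPy array: for each constraint row j, the disturbance sequence that maximizes [Xd D[0]]j over the unit box is simply the sign pattern of that row.

import numpy as np

def worst_case_disturbances(Xd):
    """Row-wise worst case over |d_i| <= 1: the j-th critical disturbance
    sequence is D_j = sign(Xd[j, :]), which maximizes [Xd D]_j (Eqn. 9)."""
    D_worst = np.sign(Xd)              # one critical sequence per constraint row
    D_worst[D_worst == 0] = 1.0        # arbitrary choice where a row entry is zero
    return D_worst                     # shape: (number of stacked rows, length of D)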

Obviously, this vector defines "the worst case" at each time instant and for each constraint. Now, in the present problem of computing the back-off under an MPC structure, we are interested in obtaining a value of us and the corresponding values of the control action Uc[k] that maximize the steady-state objective function without constraint violations. This is equivalent to solving

(10)

In this problem, the constraints should be satisfied for all disturbances. This implies that at the operating point there should exist "a control" that rejects "each disturbance". We can write this in mathematical terms as,

(11)

where x[0] has been replaced by its steady-state value. The constraints in expression (11) implicitly contain the "worst disturbances", in the sense of producing the largest violation of the constraints at any time. Now, in operation, each of these disturbances will require a particular control action to reject it. Let us denote by [z[0]]j the jth row of z[0], with j = 1, ..., P·nc (one row for each sample time and each constraint). Then, in the following we propose to consider a particular control action (Uc^j[0]) to compensate each row. That is, we can write the problem of control existence as,

for j = 1, …, P·nc; or, in matrix form, as

Aop uop + bop ≤ 0     (12)

where uop collects the decision variables of the problem (the steady state, the operating point us and the control sequences Uc^j[0]), and Aop and bop gather the steady-state equations together with the constraints (8) evaluated at the corresponding worst-case disturbances.

The first set of inequalities is included to force the process to satisfy the steady-state equations. This problem can be solved as a standard quadratic program,

(13)

where we obtain a particular control sequence, Uc^j[0], for each "worst case" disturbance. In the next section, we will use this optimization formulation together with the MPC controller in an illustrative example.
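To make the construction of (12)-(13) concrete, the sketch below gathers us, a steady state enforced by equality constraints, and one control sequence Uc^j per stacked constraint row, imposes each row of (8) at its own worst-case disturbance from (9), and optimizes a condensed quadratic objective such as the one returned by condense_steady_state_objective above. The convex-cost sign convention, the PSD weight H, and the use of cvxpy as a generic QP solver are assumptions made for this illustration; the paper's exact Aop, uop and bop were not reproduced here.

import numpy as np
import cvxpy as cp

def backoff_operating_point(A, Bs, X, Xs, Xc, Xd, xic, H, h):
    """Sketch of problems (12)-(13): choose a steady state (x0, us) and one
    control sequence Uc_j per stacked constraint row so that row j of (8)
    holds under its own worst-case disturbance D_j = sign(Xd[j, :]).
    The economic objective is assumed here to be a convex quadratic cost
    us' H us + h' us (H PSD); maximizing a concave profit is the mirror case."""
    n = A.shape[0]
    n_rows, n_uc = Xc.shape

    x0 = cp.Variable(n)                       # steady state enforced by equality constraints
    us = cp.Variable(Xs.shape[1])             # operating point passed to the MPC level
    Uc = cp.Variable((n_rows, n_uc))          # one control sequence per critical row

    # [Xd D_j]_j with D_j = sign(Xd[j, :]) is just the 1-norm of row j (Eqn. 9).
    worst_push = np.abs(Xd).sum(axis=1)

    cons = [
        x0 == A @ x0 + Bs @ us,                                   # steady-state equations
        X @ x0 + Xs @ us + cp.sum(cp.multiply(Xc, Uc), axis=1)    # [Xc Uc_j]_j, row by row
            + worst_push + xic <= 0,                              # Eqn. (12), row-wise
    ]
    prob = cp.Problem(cp.Minimize(cp.quad_form(us, H) + h @ us), cons)
    prob.solve()
    return us.value, x0.value, Uc.value

Keeping one independent control sequence per critical row is what lets a single quadratic program certify that, for every worst-case disturbance, some admissible control action exists that keeps the corresponding constraint satisfied.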

IV. EXAMPLE

The case study considered in this section consists of two continuous stirred tank reactors (CSTR) in series, with an intermediate mixer introducing a second feed (de Hennin and Perkins, 1993; de Hennin et al., 1994; Figueroa, 2000), as shown in Fig. 2. A single irreversible, exothermic, first-order reaction A → B takes place in both reactors. The dynamic model consists of the mass and energy balances of both reactors, together with algebraic relations defining the reaction rates and the heat exchanged with the cooling streams.


Fig. 2. Flowsheet Example

Table 1. Parameters and Variables

The process parameters and variables are defined in Table 1, which also presents the bounds on some variables. The state variables are the concentrations and temperatures in both reactors (x = [C1 T1 C2 T2]T), the optimization variables are the first and second feed flowrates and the cooling temperatures for both reactors, the manipulated variables are the cooling temperatures for both reactors, and the disturbances are the composition and the temperature of both feeds. The objective of the Level 2 optimization is to maximize the operating profit, expressed as,

There are the following constraints in this process (collected in the sketch shown after the list):

Safety constraints:
T1 ≤ 350, T2 ≤ 350;

Production limitations:
QF1 + QM ≤ 0.8, QF1 ≥ 0.05, QM ≥ 0.05;

Process limitations:
200 ≤ T1ci ≤ 300, 200 ≤ T2ci ≤ 300,
200 ≤ T1co ≤ 310, 200 ≤ T2co ≤ 310,
F1cw ≤ 2, F2cw ≤ 2;

Product specifications: C2 ≤ 0.3.
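For reference, a minimal sketch of how these operating constraints could be collected into the z ≤ 0 form used throughout Section III; the ordering of the entries is a choice made here for illustration only.

import numpy as np

def example_constraints(T1, T2, QF1, QM, T1ci, T2ci, T1co, T2co, F1cw, F2cw, C2):
    """Collect the process constraints of the example as z <= 0
    (ordering chosen here for illustration only)."""
    z = np.array([
        T1 - 350.0,  T2 - 350.0,                 # safety: reactor temperatures
        QF1 + QM - 0.8, 0.05 - QF1, 0.05 - QM,   # production limitations
        200.0 - T1ci, T1ci - 300.0,              # process limits: coolant inlet temperatures
        200.0 - T2ci, T2ci - 300.0,
        200.0 - T1co, T1co - 310.0,              # coolant outlet temperatures
        200.0 - T2co, T2co - 310.0,
        F1cw - 2.0,  F2cw - 2.0,                 # cooling-water flows
        C2 - 0.3,                                # product specification
    ])
    return z                                      # feasible operation requires z <= 0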

The initial values of the optimization and output variables are those obtained from the global optimization of Level 1:

us=[0.2062 0.3352 250 250]T

x=[0.1455 350 0.2105 332.1]T.

It is important to remark that the operation at these values is nominally feasible (i.e., with no disturbances). When perturbations are present, this operating point becomes infeasible due to the violation of some constraints (T1co > 310 and T2co > 310).
Using the linearized model, the solution of problem (13) modifies this operating point so that it becomes the permanently feasible optimum for the set of possible disturbances,

us=[0.53 0.27 252.13 294.27]

x=[0.356 342.32 0.197 336.83].
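As a rough consistency check of the two reported operating points, the sketch below evaluates the steady-state bounds that involve the reported values directly. It assumes the ordering x = [C1 T1 C2 T2] given in the text and interprets us as [QF1 QM Tc1 Tc2], with the cooling temperatures matched to the coolant-inlet bounds; the coolant outlet temperatures are not reported, so the dynamic violations mentioned above cannot be reproduced here.

import numpy as np

def steady_state_violations(us, x, tol=1e-9):
    """Evaluate the listed steady-state bounds at a reported operating point.
    Assumed orderings: us = [QF1, QM, Tc1, Tc2], x = [C1, T1, C2, T2]."""
    QF1, QM, Tc1, Tc2 = us
    C1, T1, C2, T2 = x
    z = np.array([
        T1 - 350.0, T2 - 350.0,                 # safety constraints
        QF1 + QM - 0.8, 0.05 - QF1, 0.05 - QM,  # production limitations
        200.0 - Tc1, Tc1 - 300.0,               # coolant inlet bounds (assumed mapping)
        200.0 - Tc2, Tc2 - 300.0,
        C2 - 0.3,                                # product specification
    ])
    return z[z > tol]                            # positive entries are violations

nominal = steady_state_violations([0.2062, 0.3352, 250.0, 250.0],
                                  [0.1455, 350.0, 0.2105, 332.1])
backoff = steady_state_violations([0.53, 0.27, 252.13, 294.27],
                                  [0.356, 342.32, 0.197, 336.83])
print(nominal, backoff)   # both empty: the listed steady-state bounds hold at either
                          # point; the nominal point only fails dynamically (T1co, T2co).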

Figures 3-6 show the dynamic response of the process under the MPC control algorithm when step disturbances in both feed temperatures are applied. The plots show the behavior of the temperature in the first and second reactors (Figs. 3 and 4, respectively) and of the cooler temperature in both reactors (Figs. 5 and 6). In all cases the process variables exhibit no constraint violations, so we can say that the process has been optimized and controlled successfully.


Fig. 3. Temperature in first reactor


Fig. 4. Temperature in second reactor


Fig. 5. Cooler temperature in first reactor


Fig. 6. Cooler temperature in second reactor

V. CONCLUSIONS

In this paper, the problem of determining the optimal operating point of a process under MPC control was presented. In particular, the effect of disturbances was considered in order to ensure that no constraints are violated when perturbations occur. To achieve this, the local optimization approach of Level 2 in the MPC hierarchy is replaced by a back-off algorithm. This algorithm was modified to allow the incorporation of an MPC scheme by including in the optimization problem as many control sequences as the number of critical perturbations. The resulting scheme was applied to a flowsheet example.

REFERENCES
1. Bandoni, J.A., J.A. Romagnoli and G.W. Barton, "On Optimising Control and the Effect of Disturbances: Calculations of the Open-Loop Back-offs", Computers and Chem. Eng. 18/S, 505-509 (1994).
2. de Olivera, S.L. and M. Morari, "Contractive Model Predictive Control for Constrained Nonlinear Systems", IEEE Trans. on Automatic Control 45, 1053-1070 (2000).
3. de Hennin, S.R. and J.D. Perkins, "Structural decisions in on-line optimization", Technical Report B93-37, Imperial College, London (1993).
4. de Hennin, S., J.D. Perkins and G.W. Barton, "Structural decisions in on-line optimization", Proc. PSE '94, Korea, 297-302 (1994).
5. Figueroa, J.L., P.A. Bahri and J.A. Romagnoli, "Economic Impact of Disturbances in Chemical Processes - A Dynamic Analysis", Interactions Between Design and Process Control, Pergamon Press, Baltimore, 141-146 (1994).
6. Figueroa, J.L., P.A. Bahri, J.A. Bandoni and J.A. Romagnoli, "Economic Impact of Disturbances and Uncertain Parameters in Chemical Processes - A Dynamic Analysis", Computers and Chem. Eng. 20, 453-461 (1996).
7. Figueroa, J.L. and A.C. Desages, "Use of Piecewise Linear Approximations for Steady-State Back-off Analysis", Optimal Control: Applications and Methods 19, 93-110 (1998).
8. Figueroa, J.L., "Economic Performance of Variable Structure Control: A Case Study", Computers and Chem. Eng. 24, 1821-1827 (2000).
9. Loeblein, C. and J.D. Perkins, "Structural Design for On-Line Process Optimization: I. Dynamic Economics of MPC", AIChE Journal 45, 1018-1029 (1999).
10. Perkins, J.D. and S.P.K. Walsh, "Optimization as a Tool for Design/Control Integration", Interactions Between Design and Process Control, Pergamon Press, Baltimore, 1-10 (1994).
11. Prett, D. and C. Garcia, Fundamental Process Control, Butterworth-Heinemann Series in Chemical Engineering (1988).
12. Qin, S.J. and T.A. Badgwell, "An Overview of Industrial Model Predictive Control Technology", Fifth International Conference on Chemical Process Control, AIChE Symposium Series 93, Eds. Kantor, Garcia and Carnahan, 232-256 (1997).
13. Raspanti, C.G. and J.L. Figueroa, "Use of a CPWL Approach for Operativity Analysis: A Case Study", Latin American Applied Research 31, 73-78 (2001).
14. Richalet, J., A. Rault, J.L. Testud and J. Papon, "Model predictive heuristic control: Applications to industrial processes", Automatica 14, 413-428 (1978).
