## Latin American applied research

*print version* ISSN 0327-0793

### Lat. Am. appl. res. v.33 n.4 Bahía Blanca Oct./Dec. 2003

**Hammerstein and Wiener model identification using rational orthonormal bases**

**J. C. Gómez^{1} and E. Baeyens^{2}**

^{1} *Laboratory for System Dynamics, FCEIA, Universidad Nacional de Rosario, Riobamba 245 Bis, 2000 Rosario, Argentina*
e-mail: jcgomez@fceia.unr.edu.ar

^{2} *Depto. de Ing. de Sistemas y Automática, ETSII, Universidad de Valladolid, Paseo del Cauce s/n, 47011 Valladolid, Spain*
e-mail: enrbae@eis.uva.es

**Abstract** —

**In this paper, noniterative algorithms for the identification of (multivariable) Hammerstein and Wiener systems are presented. The proposed algorithms are numerically robust, since they are based only on least squares estimation and singular value decomposition. For the Hammerstein model, the algorithm provides consistent estimates even in the presence of coloured output noise, under weak assumptions on the persistency of excitation of the inputs. For the Wiener model, consistency of the estimates can only be guaranteed in the noise-free case. Key in the derivation of the results is the use of rational orthonormal bases for the representation of the linear part of the systems.**

**Keywords** —

**Hammerstein and Wiener Models. Nonlinear Identification. Singular Value Decomposition.**

**I. INTRODUCTION**

In the last decades, a considerable amount of research has been carried out on modelling, identification, and control of nonlinear systems. Most dynamical systems can be better represented by nonlinear models, which are able to describe the global behaviour of the system over the whole operating range, than by linear ones, which are only able to approximate the system around a given operating point. One of the most frequently studied classes of nonlinear models is that of the so-called block-oriented nonlinear models (Pearson and Pottmann, 2000), which consist of the interconnection of Linear Time Invariant (LTI) systems and static (memoryless) nonlinearities. Within this class, two of the most common model structures are:

- the **Hammerstein** model, which consists of the cascade connection of a static (memoryless) nonlinearity followed by a LTI system (see, for instance, (Eskinat et al., 1991) for a review on identification of Hammerstein models), and
- the **Wiener** model, in which the order of the linear and the nonlinear blocks in the cascade connection is reversed (see, for instance, (Greblicki, 1994), (Wigren, 1993), (Wigren, 1994) for different methods for the identification of Wiener models).

These model structures have been successfully used to represent nonlinear systems in a number of practical applications in the areas of chemical processes (Eskinat et al., 1991), (Pearson and Pottmann, 2000), (Kalafatis et al., 1995), (Chou et al., 2000), biological processes (Korenberg, 1978), signal processing, communications, and control (Fruzzetti et al., 1997).

Several techniques have been proposed in the literature for the identification of Hammerstein and Wiener models. The reader is referred to (Narendra and Gallman, 1966), (Billings, 1980), (Billings and Fakhouri, 1982), (Eskinat et al., 1991), (Greblicki and Pawlak, 1989), and the references therein, for identification of Hammerstein models; and to (Billings, 1980), (Wigren, 1993), (Wigren, 1994), (Greblicki, 1994), (Hagenblad and Ljung, 2000), and the references therein, for identification of Wiener models. To put the present work into context, three main approaches to the identification of Hammerstein and Wiener models will be distinguished. The first is the traditional iterative algorithm proposed by Narendra and Gallman (1966). In this algorithm, an appropriate parameterization of the system makes the prediction error separately linear in each set of parameters characterizing the linear and the nonlinear parts. The estimation is then carried out by alternately minimizing a quadratic criterion on the prediction errors with respect to each set of parameters. A second approach, based on correlation techniques, is introduced in (Billings, 1980). This method relies on a separation principle, but with the rather restrictive requirement that the input be white noise. A more recent approach for the identification of single-input/single-output (SISO) Hammerstein-Wiener systems has been introduced by Bai (1998). This algorithm is based on Least Squares Estimation (LSE) and Singular Value Decomposition (SVD), but consistency of the estimates can only be guaranteed when the disturbances are white noise, or in the noise-free case.
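The iterative approach above can be sketched, for illustration, as an alternating least squares scheme for a SISO Hammerstein model with a polynomial nonlinearity and an FIR linear part. This is a minimal sketch in the spirit of (Narendra and Gallman, 1966), not the algorithm exactly as published; the function names, model orders, initialization, and normalization step are assumptions:

```python
import numpy as np

def fir_regressor(x, nb):
    """Stack delayed copies [x_k, x_{k-1}, ..., x_{k-nb+1}] for k = nb-1 .. N-1."""
    return np.column_stack([x[nb - 1 - j : len(x) - j] for j in range(nb)])

def narendra_gallman(u, y, r=3, nb=4, iters=30):
    """Alternating least squares sketch for a SISO Hammerstein model
    y_k = sum_j b_j f(u_{k-j}), with f(u) = sum_i a_i u^i (illustrative)."""
    U = np.column_stack([u ** (i + 1) for i in range(r)])  # u, u^2, ..., u^r
    a = np.zeros(r)
    a[0] = 1.0                       # initialization: f(u) = u
    yt = y[nb - 1:]                  # targets (skip the initial transient)
    for _ in range(iters):
        # Step 1: with a fixed, the model is linear in the FIR coefficients b
        X = fir_regressor(U @ a, nb)
        b, *_ = np.linalg.lstsq(X, yt, rcond=None)
        # Step 2: with b fixed, the model is linear in the polynomial coefficients a
        P = np.column_stack([fir_regressor(U[:, i], nb) @ b for i in range(r)])
        a, *_ = np.linalg.lstsq(P, yt, rcond=None)
        # fix the gain ambiguity (scaling a up and b down leaves y unchanged)
        b, a = b * a[0], a / a[0]
    return a, b
```

On noise-free data generated by a model of this form, each half-step is a global minimization over one parameter block, so the prediction error is nonincreasing over the iterations.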
Inspired by the work in (Bai, 1998), Gómez and Baeyens (2000) proposed a noniterative algorithm for the identification of Hammerstein models which, in contrast to (Bai, 1998), also applies to multivariable systems and for which consistency of the estimates is guaranteed even in the presence of coloured output noise. As in (Bai, 1998), the main computational tools employed by the algorithm are LSE and SVD, which results in numerical robustness under weak assumptions on the persistency of excitation of the inputs^{1}. Key in the derivation of these results is the use of orthonormal basis functions for the representation of the linear part of the Hammerstein model.

In recent years, there has been considerable research on how to introduce a priori information in the identification of black-box LTI model structures. A natural answer to this problem has been the use of rational orthonormal bases for the representation of the system. By choosing the poles of the bases close to the (approximately known) system poles, the accuracy of the estimate can be considerably improved (see (Gómez, 1998) for a detailed review of the use of orthonormal bases in identification of LTI systems). It is not intended to give here a complete overview of identification using rational orthonormal bases; the reader is referred to (Gómez, 1998), (Ninness and Gustafsson, 1997), (Wahlberg, 1991), (Wahlberg, 1994), (Van den Hof et al., 1995), and the references therein. In addition, the use of orthonormal bases leads to a linear regressor model, so that least squares techniques can be used for the parameter estimation. Furthermore, since the regressors depend only on past inputs, the estimate is consistent even if the output is corrupted by coloured noise, under the assumption that the actual system belongs to the model class (i.e., there is no undermodelling).

In this paper, basis function expansions are used to represent both the linear and the nonlinear parts of Hammerstein and Wiener systems. For the Hammerstein model, this parameterization results in a linear regressor form, so that least squares techniques can be used to estimate an oversized parameter matrix. Then, by resorting to Singular Value Decomposition and rank reduction, optimal estimates of the parameter matrices characterizing the linear and nonlinear parts can be obtained. For the Wiener model, the parameterization also results in a linear regressor, from which the parameters characterizing the linear and the nonlinear parts can be estimated using only least squares techniques.

In comparison with other works, the proposed algorithms have the following advantages: **1.** they apply to **multivariable** Hammerstein and Wiener models; **2.** no special assumptions on the inputs, other than the standard persistency of excitation conditions, are required; **3.** for the Hammerstein model, the algorithm provides consistent estimates even in the presence of coloured noise, while for the Wiener model, the algorithm provides consistent estimates only in the noise-free case.

The rest of the paper is organized as follows. In Section II, the multivariable Hammerstein model is introduced, the identification problem is formulated, and the optimal identification algorithm is derived. The same is done in Section III for multivariable Wiener models. Simulation examples illustrating the performance of the algorithms are presented in Section IV, and finally, some concluding remarks are provided in Section V.

**II. HAMMERSTEIN MODEL IDENTIFICATION**

**A. Problem Formulation**

A (multivariable) Hammerstein model is schematically represented in Fig. 1. The model consists of a zero-memory nonlinear element $N(\cdot)$ in cascade with a LTI system with transfer function (matrix)^{2} $G(q) \in H_2^{m \times n}$. It is assumed that the measured output $y_k$ contains an unknown additive noise component $v_k$. The input-output relationship is then given by

$$y_k = G(q)\, N(u_k) + v_k, \qquad (1)$$

where $y_k \in \mathbb{R}^m$, $u_k \in \mathbb{R}^n$, and $v_k \in \mathbb{R}^m$ are the system output, input, and measurement noise vectors at time $k$, respectively. It will be assumed that the nonlinear block can be described as

$$N(u_k) = \sum_{i=1}^{r} a_i\, g_i(u_k), \qquad (2)$$

where $g_i(\cdot) : \mathbb{R}^n \to \mathbb{R}^n$, $(i = 1, \ldots, r)$, are known (nonlinear) basis functions, and $a_i \in \mathbb{R}^{n \times n}$, $(i = 1, \ldots, r)$, are unknown matrix parameters. Typically, the nonlinear basis functions $g_i(\cdot)$ are polynomials, which allow the representation of smooth nonlinearities^{3}, but they can also be Radial Basis Functions (RBF) or basis functions generated from a mother function (e.g., wavelets). It is not the intention of this paper to give a complete overview of nonlinear approximation using basis functions; the reader is referred to the survey papers (Sjöberg et al., 1995), (Juditsky et al., 1995), and the references therein.

Figure 1: Multivariable Hammerstein Model.

On the other hand, the LTI subsystem will be represented using rational orthonormal bases as follows

$$G(q) = \sum_{\ell=0}^{p-1} b_\ell\, \mathcal{B}_\ell(q), \qquad (3)$$

where $b_\ell \in \mathbb{R}^{m \times n}$ are unknown matrix parameters, and $\{\mathcal{B}_\ell(q)\}$ are rational orthonormal bases^{4} on $H_2$.

The identification problem is to estimate the unknown parameter matrices $a_i$, $(i = 1, \ldots, r)$, and $b_\ell$, $(\ell = 0, \ldots, p-1)$, characterizing the nonlinear and the linear parts, respectively, from an $N$-point data set of observed input-output measurements.
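To make the model structure concrete, the following minimal sketch simulates data from a SISO instance of (1)-(3), taking $g_i(u) = u^i$ and the FIR special case $\mathcal{B}_\ell(q) = q^{-\ell}$; all coefficients and the noise level are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500
u = rng.standard_normal(N)              # persistently exciting input u_k
a = np.array([1.0, 0.4, -0.2])          # nonlinearity coefficients a_i (illustrative)
b = np.array([0.8, 0.5, 0.2, -0.1])     # linear-part coefficients b_l (illustrative)

x = sum(a[i] * u ** (i + 1) for i in range(len(a)))  # x_k = N(u_k), as in (2)
v = 0.01 * rng.standard_normal(N)                    # additive noise v_k
y = np.convolve(x, b)[:N] + v                        # y_k = G(q) N(u_k) + v_k, as in (1)
```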

**B. Identification Algorithm**

Substituting equations (2) and (3) in (1), the input-output relationship can be written as

$$y_k = \sum_{\ell=0}^{p-1} b_\ell\, \mathcal{B}_\ell(q) \left[ \sum_{i=1}^{r} a_i\, g_i(u_k) \right] + v_k, \qquad (4)$$

$$y_k = \sum_{\ell=0}^{p-1} \sum_{i=1}^{r} b_\ell\, a_i\, \mathcal{B}_\ell(q)\, g_i(u_k) + v_k. \qquad (5)$$

It is clear from equation (5) that the parameterization (2)-(3) is not unique, since the parameter matrices $b_\ell\, \alpha$ and $\alpha^{-1} a_i$, for any nonsingular matrix $\alpha \in \mathbb{R}^{n \times n}$, provide the same input-output equation (5). In other words, no identification experiment can distinguish between the parameters $(b_\ell, a_i)$ and $(b_\ell\, \alpha, \alpha^{-1} a_i)$. As is common in the literature (Bai, 1998), these two sets of parameters will be called equivalent. To obtain a one-to-one parameterization, i.e., for the system to be identifiable, additional constraints must be imposed on the parameters. A technique that is often used to obtain uniqueness is to normalize the parameter matrices $a_i$ (or $b_\ell$), that is, to assume for instance that $\|a\| = 1$ (or equivalently $\|b\| = 1$). A similar methodology was employed in (Bai, 1998) for a scalar Hammerstein-Wiener model. Under this assumption, the parameterization (2)-(3) is unique.

Defining now

$$\theta^T \triangleq \left[ b_0 a_1,\; b_0 a_2,\; \cdots,\; b_{p-1} a_r \right], \qquad (6)$$

$$\phi_k \triangleq \left[ \left( \mathcal{B}_0(q)\, g_1(u_k) \right)^T, \; \cdots, \; \left( \mathcal{B}_{p-1}(q)\, g_r(u_k) \right)^T \right]^T, \qquad (7)$$

equation (5) can be written as

$$y_k = \theta^T \phi_k + v_k, \qquad (8)$$

which is in linear regression form. Considering the $N$-point data set, the last equation, and defining

$$Y \triangleq \left[ y_1, y_2, \cdots, y_N \right]^T, \qquad (9)$$

$$\Phi \triangleq \left[ \phi_1, \phi_2, \cdots, \phi_N \right]^T, \qquad (10)$$

$$V \triangleq \left[ v_1, v_2, \cdots, v_N \right]^T, \qquad (11)$$

the following equation can be written

$$Y = \Phi\, \theta + V. \qquad (12)$$

It is well known (Ljung, 1999) that the estimate that minimizes a quadratic criterion on the prediction errors (that is, the least squares estimate) is given by

$$\hat{\theta}_N = \left( \Phi^T \Phi \right)^{-1} \Phi^T Y, \qquad (13)$$

provided the indicated inverse exists^{5} (Ljung, 1999), (Söderström and Stoica, 1989).
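As a numerical sketch of the least squares step (13) for a noise-free SISO example, taking $g_i(u) = u^i$ and the FIR special case $\mathcal{B}_\ell(q) = q^{-\ell}$, the regressor $\phi_k$ stacks the delayed nonlinear terms and the oversized parameter vector collects the products $b_\ell a_i$; all coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, r, p = 2000, 3, 4
u = rng.standard_normal(N)
a = np.array([1.0, 0.4, -0.2])          # illustrative a_i
b = np.array([0.8, 0.5, 0.2, -0.1])     # illustrative b_l
x = sum(a[i] * u ** (i + 1) for i in range(r))
y = np.convolve(x, b)[:N]               # noise-free output

# Phi: row k holds u_{k-l}^i for l = 0..p-1, i = 1..r (valid for k >= p-1)
Phi = np.column_stack([u[p - 1 - l : N - l] ** (i + 1)
                       for l in range(p) for i in range(r)])
Y = y[p - 1:]
theta_hat, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # least squares, as in (13)
```

In this noise-free, exactly-parameterized case the estimate recovers the products $b_\ell a_i$ up to numerical precision.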

The problem is how to estimate the parameter matrices $a_i$, $(i = 1, \ldots, r)$, and $b_\ell$, $(\ell = 0, \ldots, p-1)$, from the estimate $\hat{\theta}_N$ in (13). From the definition of $\theta$ in (6), it is easy to see that $\theta = \bar{\Theta}_{ab}$, where $\bar{\Theta}_{ab}$ is the block column matrix obtained by stacking the block columns of $\Theta_{ab}$ on top of each other, and where $\Theta_{ab}$ has been defined as

$$\Theta_{ab} \triangleq a\, b^T, \qquad (14)$$

with the following definitions for the matrices $a$ and $b$,

$$a \triangleq \left[ a_1^T, a_2^T, \cdots, a_r^T \right]^T, \qquad (15)$$

$$b \triangleq \left[ b_0^T, b_1^T, \cdots, b_{p-1}^T \right]^T. \qquad (16)$$

An estimate $\hat{\Theta}_{ab}$ of the matrix $\Theta_{ab}$ can then be obtained from the estimate $\hat{\theta}_N$ in (13). The problem now is how to estimate the parameter matrices $a$ and $b$ from the estimate $\hat{\Theta}_{ab}$. It is clear that the closest, in the 2-norm^{6} sense, estimates $\hat{a}$ and $\hat{b}$ are such that they minimize the norm

$$\left\| \hat{\Theta}_{ab} - \hat{a}\, \hat{b}^T \right\|_2. \qquad (17)$$

That is,

$$(\hat{a}, \hat{b}) = \arg\min_{a,\, b} \left\| \hat{\Theta}_{ab} - a\, b^T \right\|_2. \qquad (18)$$

The solution to this optimization problem is provided by the Singular Value Decomposition (SVD) (Golub and Van Loan, 1989) of the matrix $\hat{\Theta}_{ab}$. The result is summarized in the following Theorem.

**Theorem 1** Let $\hat{\Theta}_{ab} \in \mathbb{R}^{nr \times mp}$ have rank $k > n$, and let the economy-size SVD of $\hat{\Theta}_{ab}$ be given by

$$\hat{\Theta}_{ab} = U_k\, \Sigma_k\, V_k^T, \qquad (19)$$

where $\Sigma_k$ is a diagonal matrix containing the $k$ nonzero singular values $(\sigma_i,\, i = 1, \ldots, k)$ of $\hat{\Theta}_{ab}$ in nonincreasing order, and where the matrices $U_k = [u_1\, u_2 \cdots u_k] \in \mathbb{R}^{nr \times k}$ and $V_k = [v_1\, v_2 \cdots v_k] \in \mathbb{R}^{mp \times k}$ contain only the first $k$ columns of the unitary matrices $U \in \mathbb{R}^{nr \times nr}$ and $V \in \mathbb{R}^{mp \times mp}$ provided by the full SVD of $\hat{\Theta}_{ab}$,

$$\hat{\Theta}_{ab} = U\, \Sigma\, V^T, \qquad (20)$$

respectively^{7}. Then, the matrices $\hat{a} \in \mathbb{R}^{nr \times n}$ and $\hat{b} \in \mathbb{R}^{mp \times n}$ that minimize the norm $\| \hat{\Theta}_{ab} - \hat{a}\hat{b}^T \|_2$ are given by

$$\hat{a} = U_1, \qquad \hat{b} = V_1 \Sigma_1, \qquad (21)$$

where $U_1 \in \mathbb{R}^{nr \times n}$, $V_1 \in \mathbb{R}^{mp \times n}$, and $\Sigma_1 = \mathrm{diag}\{\sigma_1, \sigma_2, \ldots, \sigma_n\}$ are given by the following partition of the *economy-size* SVD in (19),

$$\hat{\Theta}_{ab} = \begin{bmatrix} U_1 & U_2 \end{bmatrix} \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix}, \qquad (22)$$

and the approximation error is given by

$$\left\| \hat{\Theta}_{ab} - \hat{a}\, \hat{b}^T \right\|_2 = \sigma_{n+1}. \qquad (23)$$

**Proof:** See APPENDIX.

Based on this result, the nonlinear identification algorithm can then be summarized as follows.

**Algorithm 1**

*Step 1*: Compute the least squares estimate $\hat{\theta}_N$ as in (13), and the matrix $\hat{\Theta}_{ab}$ such that

$$\hat{\theta}_N = \bar{\hat{\Theta}}_{ab}. \qquad (24)$$

*Step 2*: Compute the economy-size SVD of $\hat{\Theta}_{ab}$ as in Theorem 1, and the partition of this decomposition as in equation (22).

*Step 3*: Compute the estimates of the parameter matrices $a$ and $b$ as $\hat{a} = U_1$ and $\hat{b} = V_1 \Sigma_1$, respectively.
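Steps 2-3 can be sketched numerically for the scalar-block case ($n = 1$), where $\Theta_{ab} = ab^T$ has rank one and the SVD factors recover $a$ and $b$ up to a common sign; the vectors below are illustrative, with $a$ normalized to fix the scale ambiguity discussed in the text:

```python
import numpy as np

a_true = np.array([1.0, 0.4, -0.2])
a_true /= np.linalg.norm(a_true)          # normalization ||a|| = 1 (uniqueness)
b_true = np.array([0.8, 0.5, 0.2, -0.1])
Theta = np.outer(a_true, b_true)          # Theta_ab = a b^T, as in (14)

U, s, Vt = np.linalg.svd(Theta)           # full SVD, as in (20)
a_hat = U[:, 0]                           # a_hat = U_1, as in (21)
b_hat = s[0] * Vt[0, :]                   # b_hat = V_1 Sigma_1, as in (21)
if a_hat @ a_true < 0:                    # resolve the SVD sign ambiguity
    a_hat, b_hat = -a_hat, -b_hat
```

Because `Theta` is exactly rank one here, the rank-one SVD truncation reproduces it exactly; with a noisy estimate of the parameter matrix, the same factorization is the optimal rank-one approximation of Theorem 1.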

An important issue in any identification method is the consistency of the estimates, i.e., the convergence of the estimated parameters to the true values as the number of data points $N$ tends to infinity. Suppose that the real system belongs to the model class (defined by equations (1)-(8)). Then the observed data have actually been generated by

$$y_k = \theta_0^T \phi_k + v_k, \qquad (25)$$

for some sequence $\{v_k\}$, where $\theta_0$ can be considered as the *true* parameter vector. Since the regressors $\phi_k$ depend only on past inputs, they are uncorrelated with the noise. It is well known (Ljung, 1999) that, under these conditions, the least squares estimate is strongly consistent, in the sense that $\hat{\theta}_N$ converges (with probability one) to $\theta_0$ as $N \to \infty$, under the assumption of persistency of excitation of the regressors. Moreover, the consistency of the estimate holds even in the presence of coloured noise. The convergence of the estimate $\hat{\theta}_N$ implies that of $\hat{a}$ and $\hat{b}$. The result is summarized in the following Theorem.

**Theorem 2** Let $\hat{a}$ and $\hat{b}$ be computed using the identification Algorithm 1. Then, under the uniqueness condition, and the assumption of persistency of excitation of the regressors, $\hat{a} \to a$ and $\hat{b} \to b$ with probability one as $N$ tends to infinity. The result holds even in the presence of coloured noise.

**Proof:** See APPENDIX.

**III. WIENER MODEL IDENTIFICATION**

**A. Problem Formulation**

A (multivariable) Wiener model is schematically depicted in Fig. 2. The model consists of the cascade of a LTI system with transfer function (matrix) $G(q) \in H_2^{m \times n}$, followed by a zero-memory nonlinear element with input-output characteristic $N(\cdot)$. In this case, $y_k \in \mathbb{R}^m$, $u_k \in \mathbb{R}^n$, and $v_k \in \mathbb{R}^m$ represent the system output, input, and process noise vectors at time $k$, respectively.

Figure 2: Multivariable Wiener Model.

As in the case of Hammerstein models, it will be assumed that the LTI subsystem is represented as an orthonormal basis expansion of the form (3). On the other hand, the nonlinear function $N(\cdot) : \mathbb{R}^m \to \mathbb{R}^m$ will be assumed to be invertible^{8,9}, and its inverse will be assumed to admit the description

$$N^{-1}(y_k) = \sum_{i=1}^{r} a_i\, g_i(y_k), \qquad (26)$$

where $g_i(\cdot) : \mathbb{R}^m \to \mathbb{R}^m$, $(i = 1, \ldots, r)$, are known basis functions, and $a_i \in \mathbb{R}^{m \times m}$, $(i = 1, \ldots, r)$, are unknown matrix parameters. Without loss of generality, it will also be assumed that $a_1 = I_m$, with $I_m$ standing for the identity matrix of dimensions $(m \times m)$.

The identification problem is to estimate the unknown parameter matrices $a_i$, $(i = 2, \ldots, r)$, and $b_\ell$, $(\ell = 0, \ldots, p-1)$, characterizing the nonlinear and the linear parts, respectively, from an $N$-point data set of observed input-output measurements.

**B. Identification Algorithm**

The intermediate variable $x_k$ in Fig. 2 can be written as

$$x_k = G(q)\, u_k + v_k, \qquad (27)$$

and also as

$$x_k = N^{-1}(y_k) = \sum_{i=1}^{r} a_i\, g_i(y_k). \qquad (28)$$

Equating the right hand sides of the above two equations, and considering the parameterizations (3) and (26) of the linear and the nonlinear subsystems, respectively, the following equation is obtained

$$g_1(y_k) = \sum_{\ell=0}^{p-1} b_\ell\, \mathcal{B}_\ell(q)\, u_k - \sum_{i=2}^{r} a_i\, g_i(y_k) + v_k, \qquad (29)$$

which is a linear regression. Defining

$$\theta^T \triangleq \left[ b_0, \cdots, b_{p-1}, -a_2, \cdots, -a_r \right], \qquad (30)$$

$$\phi_k \triangleq \left[ \left( \mathcal{B}_0(q)\, u_k \right)^T, \cdots, \left( \mathcal{B}_{p-1}(q)\, u_k \right)^T, g_2^T(y_k), \cdots, g_r^T(y_k) \right]^T, \qquad (31)$$

equation (29) can be written as

$$g_1(y_k) = \theta^T \phi_k + v_k. \qquad (32)$$

Now, an estimate of $\theta$ can be computed by minimizing a quadratic criterion on the prediction errors (i.e., the least squares estimate). It is well known (Ljung, 1999) that this estimate is given by^{10}

$$\hat{\theta}_N = \left( \Phi^T \Phi \right)^{-1} \Phi^T Y, \qquad (33)$$

where the following definitions have been made

$$Y \triangleq \left[ g_1(y_1), g_1(y_2), \cdots, g_1(y_N) \right]^T, \qquad (34)$$

$$\Phi \triangleq \left[ \phi_1, \phi_2, \cdots, \phi_N \right]^T, \qquad (35)$$

$$V \triangleq \left[ v_1, v_2, \cdots, v_N \right]^T. \qquad (36)$$

Now, estimates of the parameters $a_i$, $(i = 2, \ldots, r)$, and $b_\ell$, $(\ell = 0, \ldots, p-1)$, can be computed by partitioning the estimate $\hat{\theta}_N$ in (33), according to the definition of $\theta$ in (30).

In this case, the consistency of the estimate in (33) can only be guaranteed in the noise-free case, since in the presence of noise the regressors $\{\phi_k\}$ at time $k$ will be correlated with the disturbances $\{v_k\}$ at the same instant, even if the disturbance is a white noise process (Ljung, 1999).
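As a sketch of the Wiener regression (32) and the least squares step (33) in the noise-free SISO case, the following assumes the FIR special case of the bases and an invertible cubic nonlinearity chosen so that its inverse is exactly polynomial with $a_1 = 1$; all coefficients are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N, p = 2000, 4
u = rng.standard_normal(N)
b = np.array([0.8, 0.5, 0.2, -0.1])       # illustrative linear-part coefficients
x = np.convolve(u, b)[:N]                 # intermediate variable x_k = G(q) u_k

# Inverse nonlinearity assumed as N^{-1}(y) = y + 0.1 y^3 (so a_1 = 1):
# y_k solves y + 0.1 y^3 = x_k, a monotone cubic; Cardano's formula gives
# the unique real root of y^3 + 10 y - 10 x = 0.
d = np.sqrt((5 * x) ** 2 + (10 / 3) ** 3)
y = np.cbrt(5 * x + d) + np.cbrt(5 * x - d)

# Regression (32): g_1(y_k) = y_k = sum_l b_l u_{k-l} - 0.1 y_k^3
Phi = np.column_stack([u[p - 1 - l : N - l] for l in range(p)]
                      + [y[p - 1:] ** 3])
theta_hat, *_ = np.linalg.lstsq(Phi, y[p - 1:], rcond=None)   # as in (33)
```

In this noise-free setting the regression holds exactly, so the FIR coefficients and the coefficient of the cubic term ($-a_2 = -0.1$) are recovered to numerical precision; with noise, the $y_k$-dependent regressors would be correlated with the disturbance, as discussed above.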

**IV. SIMULATION EXAMPLES**

The performance of the proposed identification algorithms is illustrated through two simulation examples.

**Example 1 (Hammerstein model)**

The nonlinear *true* system consists of a third order linear discrete system with transfer function

(37)

preceded by a static nonlinearity described by a fourth order polynomial of the form

(38)

The nonlinear characteristic is shown as curve A (solid line) in Fig. 3. The system was excited with the input

(39)

where $\gamma_k$ is a zero-mean white Gaussian process with variance $10^{-6}$, and the output was corrupted with zero-mean coloured noise.

Figure 3: True (Curve A: solid line) and Estimated (Curve B: dashed line) nonlinear characteristic (indistinguishable one from the other).

For the purposes of identification, the linear subsystem was represented using the rational Orthonormal Bases with Fixed Poles (OBFP) studied in (Ninness and Gustafsson, 1997), (Gómez, 1998), which have the more common FIR, Laguerre (Wahlberg, 1991), and Kautz (Wahlberg, 1994) bases as special cases. The bases are defined as

$$\mathcal{B}_\ell(q) \triangleq \frac{\sqrt{1 - |\xi_\ell|^2}}{q - \xi_\ell} \prod_{i=0}^{\ell - 1} \frac{1 - \bar{\xi}_i\, q}{q - \xi_i},$$

where $\{\xi_\ell\}$ are the (fixed) basis poles, and they allow prior knowledge about an arbitrary number of system modes to be incorporated in the identification process.
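As a sketch of how such basis-function regressors might be generated in practice, the following implements the OBFP (Takenaka-Malmquist) construction for real poles as a cascade of first-order orthonormal sections and all-pass factors; the function name and pole values are illustrative, not from the paper:

```python
import numpy as np
from scipy.signal import lfilter

def obfp_regressors(u, poles):
    """Filter the input u through the OBFP bases (real poles assumed):
    B_l(q) = sqrt(1 - xi_l^2)/(q - xi_l) * prod_{i<l} (1 - xi_i q)/(q - xi_i).
    Returns one column per basis function."""
    cols = []
    w = u
    for xi in poles:
        # first-order orthonormal section sqrt(1 - xi^2)/(q - xi)
        cols.append(lfilter([0.0, np.sqrt(1 - xi ** 2)], [1.0, -xi], w))
        # all-pass factor (1 - xi q)/(q - xi), feeding the next section
        w = lfilter([-xi, 1.0], [1.0, -xi], w)
    return np.column_stack(cols)
```

Feeding an impulse through the cascade gives the basis impulse responses, whose mutual inner products approximate the orthonormality relation of footnote 4.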

In this example, the poles of the bases were chosen at {-0.01, -0.2, -0.7}, so that a third order linear model was identified. The estimated transfer function was (compare with the *true* transfer function (37))

On the other hand, a fourth order polynomial was used to represent the nonlinear part of the model. The estimated nonlinear model was (compare with the true nonlinearity (38))

The estimated nonlinear characteristic is represented as curve B (dashed line) in Fig. 3. Finally, the measured (solid line) and estimated (dashed line) outputs are represented in Fig. 4, where a good agreement between them can be observed (they are almost indistinguishable one from the other).

Figure 4: Measured (solid line) and Estimated (dashed line) Outputs.

**Example 2 (pH Neutralization Process)**

In this example, a Wiener model is identified based on the simulation data of a pH neutralization process. The process consists of an acid (HNO_{3}) stream, a base (NaOH) stream, and a buffer (NaHCO_{3}) stream that are mixed in a constant volume (*V*) stirring tank. The process is schematically depicted in Fig. 5, and corresponds to a bench-scale plant at the University of California, Santa Barbara (see (Henson and Seborg, 1992), (Henson and Seborg, 1994), (Henson and Seborg, 1997)).

Figure 5: Schematic representation of the pH neutralization process.

The inputs to the system are the base flow rate (*u*_{1}) and the buffer flow rate (*u*_{2}), while the output (*y*) is the pH of the effluent solution in the tank. The acid flow rate (*u*_{3}), as well as the volume (*V*) of the tank, are assumed to be constant. A simulation model, based on first principles, is derived in (Henson and Seborg, 1992) by introducing two *reaction invariants* for each inlet stream: (*W*_{a1}, *W*_{b1}) for the base stream, (*W*_{a2}, *W*_{b2}) for the buffer stream, (*W*_{a3}, *W*_{b3}) for the acid stream, and (*W _{a}*, *W _{b}*) for the effluent solution. The dynamic model for the reaction invariants of the effluent solution (*W _{a}*, *W _{b}*), in state-space form, is given by

(40)

(41)

The nominal operating conditions of the system are given in (Henson and Seborg, 1992), (Henson and Seborg, 1994), (Henson and Seborg, 1997).

For the purposes of identification, the model was excited with band limited white noise around the nominal values of the base and buffer flow rates. The first six hundred data were used for the estimation of the model, while the following five hundred data were used for validation purposes. The linear subsystem was represented using the same rational Orthonormal Bases with Fixed Poles (OBFP) as in the previous example, with poles at {0.978, 0.9897, 0.9897, 0.99, 0.9784}, while a third order polynomial was used to represent the nonlinear part of the model. The true and estimated output (estimation-validation data) are represented in the top plot of Fig. 6, where a good agreement between them can be observed. The estimated nonlinear characteristic is represented in the bottom plot of Fig. 6.

**V. CONCLUDING REMARKS**

In this paper, noniterative methods for the identification of multivariable Hammerstein and Wiener systems have been presented. The proposed algorithms are numerically robust, since they rely only on LSE and SVD. For the case of the Hammerstein model, the algorithm provides consistent estimates under weak assumptions on the persistency of excitation of the inputs, even in the presence of coloured noise. For the case of the Wiener model, consistency of the estimates can be guaranteed only in the noise-free case. The key issue in the derivation of the results is the representation of the linear part of the system using orthonormal basis functions, which allows the system to be put in linear regressor form. In addition, the use of rational orthonormal bases allows *a priori* information one may have about the dominant dynamics of the system to be incorporated in the identification process, improving the estimation accuracy.

Figure 6: Top plot: True (solid line) and Estimated (dashed line) Output (Estimation-Validation Data). Bottom plot: Estimated nonlinear characteristic.

^{1} This is actually not a restriction, since it is clear that any identification algorithm requires some degree of persistency of excitation of the inputs. One can only identify the system modes that are sufficiently excited by the input and that can be observed from the output.

^{2} Here, *q* stands for the forward shift operator defined by $q\, x_k = x_{k+1}$, and $H_2^{m \times n}$ is the Hardy space of $(m \times n)$ transfer matrices whose elements are in $H_2$, the Hardy space of functions that are square integrable on the unit circle $\mathbb{T}$, and analytic outside the unit disk. With some abuse of terminology, $H_2^{m \times n}$ will be referred to as the space of all stable, causal, discrete-time, $(m \times n)$ transfer matrices.

^{3} Any smooth function in an interval can be represented with arbitrary accuracy by a polynomial of sufficiently high order.

^{4} The bases are orthonormal in the sense that $\langle \mathcal{B}_i, \mathcal{B}_j \rangle = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta, and $\langle \cdot, \cdot \rangle$ is the standard inner product in $L_2(\mathbb{T})$, defined as $\langle F, G \rangle \triangleq \frac{1}{2\pi} \int_{-\pi}^{\pi} F(e^{j\omega})\, G^H(e^{j\omega})\, d\omega$.

^{5} The inverse exists provided that the regressors $\phi_k$ are persistently exciting (PE), in the sense that there exist some integer $N_0$ and positive constants $\alpha_1$ and $\alpha_2$ such that $\alpha_1 I \le \frac{1}{N_0} \sum_{k=t+1}^{t+N_0} \phi_k\, \phi_k^T \le \alpha_2 I$ for all $t$.

^{6} The 2-norm of a matrix $A = (a_{ij})_{(m \times n)}$ is the norm induced by the 2-norm (or Euclidean norm) of vectors.

^{7} In equation (20), the matrix $\Sigma \in \mathbb{R}^{nr \times mp}$ is given by $\Sigma = \begin{bmatrix} \Sigma_{mp} \\ 0 \end{bmatrix}$ for $nr \ge mp$, or $\Sigma = \begin{bmatrix} \Sigma_{nr} & 0 \end{bmatrix}$ for $nr \le mp$.

^{8} As pointed out in (Pearson and Pottmann, 2000), this rules out the use of the proposed identification algorithm for processes in which the phenomenon of input multiplicity is present (see next footnote).

^{9} *Input Multiplicity* is the situation in which more than one steady-state input value *u _{ss}* corresponds to the same steady-state output value *y _{ss}*.

^{10} Provided the indicated inverse exists.

**APPENDIX**

**Proof of Theorem 1** Let the Singular Value Decomposition of the matrix $\hat{\Theta}_{ab} \in \mathbb{R}^{nr \times mp}$ be given by

$$\hat{\Theta}_{ab} = U\, \Sigma\, V^T = \sum_{i=1}^{k} \sigma_i\, u_i\, v_i^T, \qquad (42)$$

where $k$ is the rank of $\hat{\Theta}_{ab}$. Appealing to Theorem 2.5.2 in (Golub and Van Loan, 1989), the rank-$n$ matrix $\Theta_n \in \mathbb{R}^{nr \times mp}$ $(n < k)$ which is closest, in the 2-norm sense, to $\hat{\Theta}_{ab}$ is given by

$$\Theta_n = \sum_{i=1}^{n} \sigma_i\, u_i\, v_i^T, \qquad (43)$$

and the approximation error is given by

$$\left\| \hat{\Theta}_{ab} - \Theta_n \right\|_2 = \sigma_{n+1}. \qquad (44)$$

Considering now the partition of the *economy-size* SVD of $\hat{\Theta}_{ab}$ in (22), it is clear that

$$\Theta_n = U_1\, \Sigma_1\, V_1^T = (U_1)(V_1 \Sigma_1)^T,$$

which concludes the proof, by equating $\hat{a} = U_1$ and $\hat{b} = V_1 \Sigma_1$.

**Proof of Theorem 2** The convergence of the estimate $\hat{\theta}_N$ in (13) implies that $\hat{\Theta}_{ab} \to \Theta_{ab}$ with probability one as $N$ tends to infinity. Noting now that

$$\left\| \hat{a}\, \hat{b}^T - \Theta_{ab} \right\|_2 \le \left\| \hat{a}\, \hat{b}^T - \hat{\Theta}_{ab} \right\|_2 + \left\| \hat{\Theta}_{ab} - \Theta_{ab} \right\|_2 = \sigma_{n+1} + \left\| \hat{\Theta}_{ab} - \Theta_{ab} \right\|_2, \qquad (45)$$

and taking into account that $\Theta_{ab}$ is a rank-$n$ matrix (so that $\sigma_{n+1} \to 0$), then $\hat{a}\, \hat{b}^T \to \Theta_{ab}$ with probability one as $N$ tends to infinity. Now, from the uniqueness of the decomposition $\Theta_{ab} = a\, b^T$, it can be concluded that $\hat{a} \to a$ and $\hat{b} \to b$ as $N$ tends to infinity, which concludes the proof.

**REFERENCES**

1. Bai, E. W., "An optimal two-stage identification algorithm for Hammerstein-Wiener nonlinear systems", *Automatica*, **34**(3), 333-338 (1998).

2. Billings, S., "Identification of nonlinear systems - A survey", *Proc. IEE, Part D*, **127**, 272-285 (1980).

3. Billings, S. and S. Fakhouri, "Identification of systems containing linear dynamic and static nonlinear elements", *Automatica*, **18**(1), 15-26 (1982).

4. Chou, C., H. Bloemen, V. Verdult, T. van den Boom, T. Backx, and M. Verhaegen, "Nonlinear identification of high purity distillation columns", in *Proc. of the IFAC Symposium on System Identification SYSID 2000*, 415-420, Santa Barbara, CA (2000).

5. Eskinat, E., S. Johnson, and W. Luyben, "Use of Hammerstein models in identification of nonlinear systems", *AIChE Journal*, **37**(2), 255-268 (1991).

6. Fruzzetti, K., A. Palazoglu, and K. McDonald, "Nonlinear model predictive control using Hammerstein models", *Journal of Process Control*, **7**(1), 31-41 (1997).

7. Golub, G. and C. Van Loan, *Matrix Computations*, The Johns Hopkins University Press, Baltimore, 2nd edition (1989).

8. Gómez, J. C., *Analysis of Dynamic System Identification using Rational Orthonormal Bases*, PhD thesis, The University of Newcastle, Australia, PS file at http://fceia.unr.edu.ar/~jcgomez/ (1998).

9. Gómez, J. C. and E. Baeyens, "Identification of multivariable Hammerstein systems using rational orthonormal bases", in *Proc. of the 39th. IEEE CDC*, 2849-2854, Sydney, Australia (2000).

10. Greblicki, W., "Nonparametric identification of Wiener systems by orthogonal series", *IEEE Trans. on Autom. Contr.*, **39**(10), 2077-2086 (1994).

11. Greblicki, W. and M. Pawlak, "Nonparametric identification of Hammerstein systems", *IEEE Trans. on Information Theory*, **35**(2), 409-418 (1989).

12. Hagenblad, A. and L. Ljung, "Maximum likelihood estimation of Wiener models", in *Proc. of the 39th IEEE CDC*, 712-713, Sydney, Australia (2000).

13. Henson, M. and D. Seborg, "Nonlinear adaptive control of a pH neutralization process", in *Proc. of the ACC*, 2586-2590 (1992).

14. Henson, M. and D. Seborg, "Adaptive nonlinear control of a pH neutralization process", *IEEE Trans. on Control Systems Technology*, **2**(3), 169-182 (1994).

15. Henson, M. and D. Seborg, editors, *Nonlinear Process Control*, chapter 4: "Feedback Linearizing Control", Prentice Hall PTR, N.J. (1997).

16. Juditsky, A., H. Hjalmarsson, A. Benveniste, B. Delyon, L. Ljung, J. Sjöberg, and Q. Zhang, "Nonlinear black-box modeling in system identification: Mathematical foundations", *Automatica*, **31**(12), 1725-1750 (1995).

17. Kalafatis, A., N. Arifin, L. Wang, and W. Cluett, "A new approach to the identification of pH processes based on the Wiener model", *Chemical Engineering Science*, **50**(23), 3693-3701 (1995).

18. Korenberg, M., "Identification of biological cascades of linear and static nonlinear systems", in *Proc. of the 16th. Midwest Symposium on Circuit Theory*, 2.1-2.9 (1978).

19. Ljung, L., *System Identification: Theory for the User*, Prentice Hall, Inc., New Jersey, 2nd edition (1999).

20. Narendra, K. and P. Gallman, "An iterative method for the identification of nonlinear systems using a Hammerstein model", *IEEE Trans. on Autom. Contr.*, **AC-11**, 546-550 (1966).

21. Ninness, B. and F. Gustafsson, "A unifying construction of orthonormal bases for system identification", *IEEE Trans. on Autom. Contr.*, **AC-42**(4), 515-521 (1997).

22. Pearson, R. and M. Pottmann, "Gray-box identification of block-oriented nonlinear models", *Journal of Process Control*, **10**, 301-315 (2000).

23. Sjöberg, J., Q. Zhang, L. Ljung, A. Benveniste, B. Delyon, P. Glorennec, H. Hjalmarsson, and A. Juditsky, "Nonlinear black-box modeling in system identification: a unified approach", *Automatica*, **31**(12), 1691-1724 (1995).

24. Söderström, T. and P. Stoica, *System Identification*, Prentice Hall, Inc., New Jersey (1989).

25. Van den Hof, P., P. Heuberger, and J. Bokor, "System identification with generalized orthonormal basis functions", *Automatica*, **31**(12), 1821-1834 (1995).

26. Wahlberg, B., "System identification using Laguerre models", *IEEE Trans. on Autom. Contr.*, **AC-36**(5), 551-562 (1991).

27. Wahlberg, B., "System identification using Kautz models", *IEEE Trans. on Autom. Contr.*, **AC-39**(6), 1276-1282 (1994).

28. Wigren, T., "Recursive prediction error identification using the nonlinear Wiener model", *Automatica*, **29**(4), 1011-1025 (1993).

29. Wigren, T., "Convergence analysis of recursive identification algorithms based on the nonlinear Wiener model", *IEEE Trans. on Autom. Contr.*, **39**(11), 2191-2206 (1994).