
## Latin American applied research

*Print version* ISSN 0327-0793

### Lat. Am. appl. res. v.35 n.2 Bahía Blanca Apr./June 2005

**Sampled-data minimum variance filtering**

**L. D. Hernández and R. H. Milocco**

*Grupo de Control Automático y Sistemas. Facultad de Ingeniería, U.N.Comahue Buenos Aires 1400, 8300 Neuquén, Argentina henaleo@hotmail.com, milocco@uncoma.edu.ar*

*Abstract*: This paper deals with the optimal solution to the sampled-data minimum variance filtering problem for linear systems with noise in the states and in the measurements. The solution is derived in the time domain by using a fast sampling zero-order hold input discretization of the continuous time systems together with a lifting technique. The original sampled-data system is transformed into an equivalent LTI discrete-time system with infinite-dimensional input-output space. However, the designed filter is finite-dimensional. We derive both the existence conditions and the explicit expression of the desired filter and provide an illustrative numerical example.

*Keywords*: Filtering; Lifting; *T*-Periodic Systems; Sampled-Data.

**I. INTRODUCTION**

Minimum variance filtering problems have been extensively studied via both the state space (Kalman, 1960; Anderson and Moore, 1975; Shaked, 1976) and the polynomial system approach (Wiener, 1950; Roberts and Newmann, 1987; Ahlén and Sternad, 1991; Grimble, 1995). The design techniques are based on continuous-time (ct) or discrete-time (dt) system descriptions. However, in most applications a more realistic situation is that in which a digital filter must be designed to interact with ct systems. In such cases, the estimated signal is formed by the output of the dt filter through a zero-order hold device. The goal is to match the piecewise estimations to the desired ct signal. In general, there are two classic approaches used to design the corresponding dt filter. The first approach consists of discretizing the ct system and designing a dt filter. In the second one, a digital implementation of the optimal filter obtained by a ct design is performed. Due to the intersample behaviour of the ct systems in both approaches, there is a serious performance degradation when the sampling is not fast enough. There is a third approach, called sampled-data design, in which the dt filter is designed taking into account the dynamics of the ct systems involved. Recent techniques, such as those based on linear systems with jumps (Sun *et al.*, 1993) and the lifting technique (Bamieh *et al.*, 1991; Hara *et al.*, 1997), have been applied to direct sampled-data design. Although these designs have been extensively used in feedback control systems, filter design has received comparatively little attention despite its importance in signal processing applications. Filtering sampled-data design has been investigated in the H_{∞} context in Sun *et al.* (1993) and Kabamba *et al.* (1993), and in the H_{2} context in Milocco and De Doná (1996), Wang *et al.* (2001), and Milocco and Muravchik (2003).
In Wang *et al.* (2001), a filtering sampled-data design based on the Error Covariance Assignment criterion is proposed, while in Milocco and De Doná (1996) and Milocco and Muravchik (2003), a frequency domain approach to MIMO linear filter design for sampled-data systems is presented by using a polynomial approach. In this paper, we extend the results obtained in Milocco and De Doná (1996) to design MIMO sampled-data filters in the time domain. The proposed solution allows us to obtain the sampled-data minimum variance estimation of the states as well as optimal solutions for minimum variance sampled-data filtering problems such as deconvolution, prediction and smoothing.

The paper is organized as follows: In section II, we provide a suitable description of the multivariable sampled-data system, i.e. the ct subsystem followed by a sampling stage at intervals of *T* sec., the dt subsystem or filter, and a holding device. Such systems can be represented with the help of linear *T*-periodic systems. By means of a fast sampling zero-order hold input discretization of the ct systems together with a lifting technique, the original sampled-data system is transformed into an equivalent LTI dt system with infinite-dimensional input-output space. The cost to be minimized is defined as the averaged energy of the weighted output-error vector. In section III, the matrices of the dt filter state space representation are obtained such that the cost is minimized. In section IV, an example to illustrate the procedure is provided and finally, in section V, we present the conclusions.

**II. PROBLEM FORMULATION**

Consider the following ct time-invariant generalized system:

$$\begin{aligned}\dot{x}(t) &= A\,x(t) + B\,\eta_1(t)\\ y(t) &= C_1\,x(t) + D\,\eta_2(t)\\ z(t) &= C_2\,x(t)\end{aligned} \qquad (1)$$

*A*, *B*, *C*_{1}, *C*_{2}, and *D* are known constant matrices, and η_{1}(*t*) and η_{2}(*t*) are ct white noise input vectors, possibly correlated, affecting the states (*x*(*t*)) and the measured output (*y*(*t*)) respectively; *z*(*t*) is the desired output. By sampling the signal *y*(*t*), our aim is to design a dt filter to produce, through a zero-order hold, a piecewise-continuous signal ẑ(*t*) to match the desired signal *z*(*t*) in the sense of minimum variance. The flowgraph of the sampled-data filtering problem is depicted in Fig. 1.

Figure 1: Filtering problem design.

The signal ẑ(*t*) is the zero-order hold output of the dt time-invariant filter driven by samples *y _{s}*[*kT*] of the signal *y*(*t*), with sampling period *T*. We call 𝒮 an ideal sampler (with period *T*) and ℋ a zero-order hold (with period *T*). The sampling operator 𝒮 maps the vector space of piecewise-continuous functions to the space of the sequences defined on the set of integers, negative and positive, and is defined as

$$(\mathcal{S}\,y)[k] = y(kT). \qquad (2)$$

The hold operator ℋ maps the sequence space back to the space of piecewise-continuous functions via

$$(\mathcal{H}\,u)(t) = u[k], \qquad kT \le t < (k+1)T. \qquad (3)$$
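As a concrete illustration of the maps (2) and (3), the sampler and hold can be sketched numerically. The following Python fragment is not from the paper; the function names are ours:

```python
import numpy as np

def ideal_sample(y, T, num_samples):
    """Ideal sampler (Eq. 2): returns the sequence y(kT), k = 0..num_samples-1."""
    return np.array([y(k * T) for k in range(num_samples)])

def zoh(samples, T, t):
    """Zero-order hold (Eq. 3): holds samples[k] on the interval [kT, (k+1)T)."""
    k = min(int(t // T), len(samples) - 1)
    return samples[k]

# Sample the ramp y(t) = t with period T = 0.5 and reconstruct at t = 0.75
T = 0.5
ys = ideal_sample(lambda t: t, T, 4)  # -> [0.0, 0.5, 1.0, 1.5]
print(zoh(ys, T, 0.75))               # holds the k = 1 sample: 0.5
```

The composition hold-filter-sampler used below is exactly this pair of operators wrapped around a dt filter.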

𝒮 and ℋ are synchronized and provide the interface between the digital and the analog parts of the system. We call the composition K̂ of the hold ℋ, the dt filter *K*, and the sampler 𝒮 the sampled-data filter. Note that K̂ is a ct linear time-varying *T*-periodic operator that turns the complete filtering setup of Fig. 1 also into a linear time-varying *T*-periodic operator that we call S. A *T*-periodic system is one whose response to an input delayed by exactly *T* sec. is obtained by delaying the original output exactly by *T* sec. If φ(*t*, τ) denotes the response of a time-varying *T*-periodic linear system at time *t* when a vector of Dirac delta functions is applied to the input at time τ, then this impulse response satisfies φ(*t* + *T*, τ) = φ(*t*, τ - *T*).

The time-varying *T*-periodic variance of the error *e*(*t*) = *z*(*t*) - ẑ(*t*) between the desired and estimated signals, when the *T*-periodic operator S is driven by the input vector η = [η_{1}^{T} η_{2}^{T}]^{T}, is given by E{*e*(*t*)*e*^{T}(*t*)}, where E means expectation. We define the cost function to be minimized with the sampled-data filter as the error variance averaged over the interval *T* as follows:

$$J(K) = \frac{1}{T}\int_0^T \mathrm{tr}\,\mathrm{E}\{e(t)\,e^T(t)\}\,dt. \qquad (4)$$

Let φ(*t*, τ) denote the impulse response of S. Thus, the error is given by

$$e(t) = \int_{-\infty}^{\infty} \phi(t,\tau)\,\eta(\tau)\,d\tau \qquad (5)$$

and its covariance matrix becomes

$$\mathrm{E}\{e(t)\,e^T(t)\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \phi(t,\tau_1)\,\mathrm{E}\{\eta(\tau_1)\eta^T(\tau_2)\}\,\phi^T(t,\tau_2)\,d\tau_1\,d\tau_2. \qquad (6)$$

Since E{η(τ_{1})η^{T}(τ_{2})} = *I*δ(τ_{1} - τ_{2}), the cost (4) can be written as

$$J(K) = \frac{1}{T}\int_0^T \int_{-\infty}^{\infty} \mathrm{tr}\{\phi(t,\tau)\,\phi^T(t,\tau)\}\,d\tau\,dt. \qquad (7)$$

Assuming the *T*-periodic operator S is band-limited or, equivalently, that the power spectral density of the error decreases to zero as the frequency increases towards infinity, we show that the cost function can be approximated by a new cost associated with the zero-order hold discretized systems with small sampling period *T _{s}*. To this end, we follow the same line as in Hara *et al.* (1997). First, consider the small sampling period *T _{s}* = *T*/*N*, where *N* is an integer. Then, the cost in (7) can be approximated by

$$J(K) \approx \frac{T_s^2}{T}\sum_{k=0}^{N-1}\sum_{l=-\infty}^{\infty} \mathrm{tr}\{\phi(kT_s, lT_s)\,\phi^T(kT_s, lT_s)\}. \qquad (8)$$

Let φ* _{s}*(*k*, *l*) denote the impulse response of the fast sampling zero-order hold input discretization of S. It is related to φ(*t*, τ) by

$$\phi_s(k,l) = \int_{lT_s}^{(l+1)T_s} \phi(kT_s, \tau)\,d\tau. \qquad (9)$$

In the limit, as *T* is fixed and *N* → ∞, the following approximation is valid:

$$\phi_s(k,l) = T_s\,\phi(kT_s,\, lT_s). \qquad (10)$$

By using φ* _{s}*(*k*, *l*) in (8), we obtain

$$J(K) \approx J_N(K) = \frac{1}{N T_s}\sum_{k=0}^{N-1}\sum_{l=-\infty}^{\infty} \mathrm{tr}\{\phi_s(k,l)\,\phi_s^T(k,l)\}. \qquad (11)$$

For a given sampling time *T*, equality holds for *N* → ∞.
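The convergence of the fast-sampling approximation can be checked on a toy scalar case of our own (not from the paper). Suppose the error operator were time-invariant with impulse response φ(*t*, τ) = e^{-(t-τ)} for τ ≤ *t*; then the exact cost (7) is ∫₀^∞ e^{-2σ} dσ = 1/2, and the Riemann-sum cost of (8) approaches it as *N* grows:

```python
import math

def cost_exact():
    # (1/T)∫₀ᵀ ∫ φ²(t,τ) dτ dt with φ(t,τ) = exp(-(t-τ)), τ ≤ t, equals 1/2
    return 0.5

def cost_fast(N, T=1.0):
    """Riemann-sum approximation of the cost with T_s = T/N (cf. Eq. 8).
    For this time-invariant φ the inner sum is identical for every k:
    Σ_{m≥0} exp(-2 m Ts) · Ts = Ts / (1 - exp(-2 Ts))."""
    Ts = T / N
    return Ts / (1.0 - math.exp(-2.0 * Ts))

for N in (2, 10, 50, 200):
    print(N, cost_fast(N))  # approaches 0.5 as N grows
```

The approximation error shrinks roughly as 1/*N*, which is the behaviour exploited by the iterative design procedure of section IV.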

In order to have a working expression of the approximated cost (11), we are interested in transforming the fast zero-order hold input discretization of the *T*-periodic system into an equivalent LTI dt system. To this end, note that the following equality holds for the impulse response of the fast sampled approximation φ* _{s}*(*k*, *l*):

$$\phi_s(k + N,\; l + N) = \phi_s(k,\; l).$$

Then, the impulse response φ* _{s}*(*k*, *l*) of the time-varying system can be grouped into blocks forming the impulse response of an augmented-dimension time-invariant system. The impulse response φ̄(*h*) for *h* = *k* - *l* of such a time-invariant system is given by

$$\bar{\phi}_{r,s}(h) = \phi_s(hN + r,\; s), \qquad r, s = 0, \dots, N-1,$$

where each element φ̄* _{r,s}*(*h*) of φ̄(*h*) represents the *r*-th output component at time *hT*, due to an impulse at time 0 in the *s*-th input component of the lifted multivariable operator. Notice that φ̄ is the lifted construction of the fast sampled dt hold input approximation of the periodic system S. By using the impulse response of the lifted system, the following relationship is obtained:

$$\sum_{k=0}^{N-1}\sum_{l}\mathrm{tr}\{\phi_s(k,l)\,\phi_s^T(k,l)\} = \sum_{h}\mathrm{tr}\{\bar{\phi}(h)\,\bar{\phi}^T(h)\}.$$

From the equality above, it is clear that the following holds:

$$J_N(K) = \frac{1}{N T_s}\sum_{h}\mathrm{tr}\{\bar{\phi}(h)\,\bar{\phi}^T(h)\}. \qquad (12)$$

Note that an equivalent interpretation of *J _{N}*(*K*) is given by

$$J_N(K) = \frac{1}{N T_s}\,\mathrm{tr}\,\mathrm{E}\{\bar{e}[h]\,\bar{e}^T[h]\}, \qquad (13)$$

where ē(*h*) is the output of the fast sampled zero-order hold and lifted system driven by a white noise sequence with identity covariance matrix, and *J _{N}*(*K*) is the sum of the variances of each of the elements in ē(*h*). Having obtained a working expression of the cost, we minimize *J _{N}*(*K*) instead of *J*(*K*). Thus, in order to obtain ē(*h*), we need the lifted version of the fast sampled dt zero-order hold input approximation of the periodic system S.

Assume the zero-order hold input discretization with sampling time *T _{s}* of the ct system (1), together with the low-pass antialiasing filter, is given by

$$x[k+1] = \Phi\,x[k] + \Gamma\,\eta[k], \qquad y_s[k] = L_1\,x[k], \qquad z[k] = L_2\,x[k], \qquad (14)$$

where η[*k*] is a white noise sequence vector with covariance matrix E{η[*k*]η^{T}[*k*]} = *I* and the constant matrices Φ, Γ, *L*_{1}, and *L*_{2} are known. Our purpose is to lift the system from sampling time *T _{s}* to *T*. This lifting operation can be formulated by means of a block implementation as used, for instance, in Keller and Anderson (1992). The transformed signal η̄(*kT*) is defined by

$$\bar{\eta}[kT] = \left[\eta^T[kT] \;\; \eta^T[kT + T_s] \;\; \cdots \;\; \eta^T[kT + (N-1)T_s]\right]^T.$$
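The ZOH discretization assumed in (14) can be computed with the standard matrix-exponential construction Φ = e^{AT_s}, Γ = ∫₀^{T_s} e^{Aσ} dσ · *B*. A sketch (ours, not the paper's code):

```python
import numpy as np
from scipy.linalg import expm

def zoh_discretize(A, B, Ts):
    """ZOH discretization: Phi = exp(A*Ts), Gamma = ∫₀^Ts exp(A s) ds · B.
    Uses the augmented-matrix trick: expm of [[A, B], [0, 0]] * Ts."""
    n, m = A.shape[0], B.shape[1]
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    E = expm(M * Ts)
    return E[:n, :n], E[:n, n:]

# Double integrator: A = [[0, 1], [0, 0]], B = [[0], [1]], Ts = 0.5
Phi, Gamma = zoh_discretize(np.array([[0., 1.], [0., 0.]]),
                            np.array([[0.], [1.]]), 0.5)
# Phi = [[1, 0.5], [0, 1]], Gamma = [[0.125], [0.5]]
```

The same routine, called with *T _{s}* = *T*/*N*, produces the fast-rate matrices that the lifting below stacks into blocks.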

^{T} Note that the new vector signal (*kT*) corresponds to a compound signal with sampling period *T*. The fast system *S _{d}* can be lifted to the system

*, which maps input blocks (*

_{d}*kT*) to output blocks (

*kT*) and (

*kT*). This lifted system

*is associated with the sampling time*

_{d}*T*and is given by

(15) |

The structure of the matrices is as follows:

. (16) |
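Assuming the standard lifted structure Φ̄ = Φ^N, Γ̄ = [Φ^{N-1}Γ … Γ] with the output matrices stacked over one slow period, the lifting can be assembled as follows (a sketch with our own function names):

```python
import numpy as np

def lift(Phi, Gamma, L, N):
    """Lift a fast-rate system by a factor N:
    Phi_bar = Phi^N, Gamma_bar = [Phi^{N-1}·Gamma ... Gamma],
    L_bar stacks L·Phi^j for j = 0..N-1."""
    mp = np.linalg.matrix_power
    Phi_bar = mp(Phi, N)
    Gamma_bar = np.hstack([mp(Phi, N - 1 - j) @ Gamma for j in range(N)])
    L_bar = np.vstack([L @ mp(Phi, j) for j in range(N)])
    return Phi_bar, Gamma_bar, L_bar

# Scalar example: Phi = 2, Gamma = 1, L = 1, lifted by N = 3
Pb, Gb, Lb = lift(np.array([[2.]]), np.array([[1.]]), np.array([[1.]]), 3)
# Pb = [[8]], Gb = [[4, 2, 1]], Lb = [[1], [2], [4]]
```

Note that the state dimension is unchanged; only the input and output spaces grow by the factor *N*, which is why the final filter remains finite-dimensional.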

Considering the dt lifted version of the system, the sampled output is given by *y _{s}*[*kT*] = Ŝ ȳ[*kT*], which corresponds to the first *n* elements of the vectorized signal ȳ[*kT*], where *n* is the number of ct measured signals. Then, the sampler is represented by the *n* × *Nn* matrix Ŝ = [*I _{n}* 0 ... 0]. The discretized hold element produces a sequence of *N* pulses equal to each filter output. Thus, it can be represented by a matrix of dimension *Np* × *p* given by Ĥ = [*I _{p}* ... *I _{p}*]^{T}, where *p* is the number of the filter outputs. Finally, taking into account that the first block row of D̄_{1} is zero, the design problem can be stated as follows: Given the fast sampled and lifted system

$$x[(k+1)T] = \bar{\Phi}\,x[kT] + \bar{\Gamma}\,\bar{\eta}[kT], \qquad y_s[kT] = L_1\,x[kT], \qquad \bar{z}[kT] = \bar{L}_2\,x[kT] + \bar{D}_2\,\bar{\eta}[kT], \qquad (17)$$

where *L*_{1} = Ŝ L̄_{1}, find the stationary dt filter given by

$$x_f[k+1] = \Phi_f\,x_f[k] + \Gamma_f\,y_s[k], \qquad \hat{z}[k] = L_f\,x_f[k] + D_f\,y_s[k], \qquad (18)$$

such that the cost *J _{N}*(*K*) given by

$$J_N(K) = \frac{1}{N T_s}\,\mathrm{tr}\,\mathrm{E}\{\bar{e}[kT]\,\bar{e}^T[kT]\}, \qquad (19)$$

where ē[*kT*] = z̄[*kT*] - Ĥ ẑ[*kT*], is minimized. In the next section we derive the optimal constant matrices Φ* _{f}*, Γ* _{f}*, *L _{f}*, and *D _{f}* of *K*.

**III. MINIMUM VARIANCE FILTER DESIGN**

Using the state space representation of the dt lifted system in (17) and the dt filter *K* in (18), the state space equations for the estimation error ē[*k*] are

$$\xi[k+1] = \bar{A}\,\xi[k] + \bar{B}\,\bar{\eta}[k], \qquad \bar{e}[k] = \bar{C}\,\xi[k] + \bar{D}\,\bar{\eta}[k], \qquad \xi[k] = \begin{bmatrix}x[k]\\ x_f[k]\end{bmatrix}, \qquad (20)$$

where the sampling time *T* is dropped from now on and Ā, B̄, C̄, and D̄ are given by

$$\bar{A} = \begin{bmatrix}\bar{\Phi} & 0\\ \Gamma_f L_1 & \Phi_f\end{bmatrix}, \quad \bar{B} = \begin{bmatrix}\bar{\Gamma}\\ 0\end{bmatrix}, \quad \bar{C} = \left[\,\bar{L}_2 - \hat{H} D_f L_1 \;\; -\hat{H} L_f\,\right], \quad \text{and} \quad \bar{D} = \bar{D}_2. \qquad (21)$$

Then, the stationary covariance matrix of ξ is

$$P = \mathrm{E}\{\xi[k]\,\xi^T[k]\}. \qquad (22)$$

By replacing ē[*k*] of equation (20) in equations (22) and (19), it is easy to see that the stationary covariance matrix and the cost *J _{N}*(*K*) fulfil

$$P = \bar{A}\,P\,\bar{A}^T + \bar{B}\,\bar{B}^T, \qquad (23)$$

$$J_N(K) = \frac{1}{N T_s}\,\mathrm{tr}\{\bar{C}\,P\,\bar{C}^T + \bar{D}\,\bar{D}^T\}. \qquad (24)$$

^{T} Note that the covariance matrix *P* does not depend on *L _{f}* nor on

*D*. Then, in order to obtain the optimal values of Φ

_{f}*, Γ*

_{f}*,*

_{f}*L*, and

_{f}*D*, we start by minimizing the cost

_{f}*J*(

_{N}*K*) with respect to

*L*. To this end, let

_{f}*L*be all possible matrices of constants parameterized as

_{f}*L*=

_{f}*L*+ , where is a small scalar; , an arbitrary constant matrix; and

_{fo}*L*, the optimal matrix which minimizes the cost

_{fo}*J*(

_{N}*K*) in (24). No matter which is chosen, the minimum of the cost function is obtained for the choice of the scalar value = 0. The necessary condition for the minimum is given by and the sufficient condition is . The last condition is always satisfied (due to the convexity of

*J*(

_{N}*K*)). By replacing given by (21) in (24) and by using

*L*=

_{f}*L*+ , the following optimal value of

_{fo}*L*is obtained by deriving the cost with respect to and equating the result to zero:

_{f}. (25) |

By using (25), we can rewrite C̄ as

(26)

where

$$a = \bar{L}_2 - \hat{H}\,D_f\,L_1. \qquad (27)$$

Then, the cost *J _{N}*(*K*) in (24) can be written as

(28)

where

(29)

We can rewrite the cost in a much more suitable form as

(30)

Since *P*_{1}, *a*, and D̄_{2} do not depend on Φ* _{f}* and Γ* _{f}*, we are able to find the minimum of the cost with respect to Φ* _{f}* and Γ* _{f}* by just minimizing the first term. Thus, we rewrite the first term as

(31)

Taking into account that the covariance matrix of the error in the states (x̃) is

$$\tilde{P} = \mathrm{E}\{\tilde{x}\,\tilde{x}^T\}, \qquad (32)$$

where x̃ = *x* - x̂, we will show that the minimum of (31) is reached by minimizing P̃ in the positive definite sense. In the minimum, the estimation error is uncorrelated with the estimation of *x*. This is the optimal error that can be achieved for the covariance matrix since otherwise, if the estimation error were correlated with x̂, a further decrease in P̃ could be achieved. Then, E{x̃ x̂^{T}} = 0 holds, which leads to the following equality:

$$P_{12} = \mathrm{E}\{x\,\hat{x}^T\} = \mathrm{E}\{\hat{x}\,\hat{x}^T\} + \mathrm{E}\{\tilde{x}\,\hat{x}^T\} \;\Rightarrow\; P_{12} = P_2.$$

By using this equality, equation (31) is written as

(33)

Since the term in (33) is positive definite, its optimum value is given by the minimum of P̃. Then, we need to find the minimum of P̃ with respect to Φ* _{f}* and Γ* _{f}*, subject to the bias constraint E{x̃} = 0. The optimal stationary solution to this problem is given by the stationary dt Kalman filter (Anderson and Moore, 1975) according to the following equations:

(34)

The optimal value of *D _{f}* still remains to be obtained. To this end, we replace (33) in (30), which gives

(35)

In the same way as with *L _{fo}*, we obtain the optimum *D _{fo}* by using the parametrization *D _{f}* = *D _{fo}* + εΔ. By deriving the cost with respect to ε and equating to zero, the optimal value is

(36)

The optimal stationary dt filter *K* is given by (18), where Φ* _{f}*, Γ* _{f}*, and *D _{f}* are given by (34) and (36). *L _{f}* from (25) is given by

(37)

The expression of P̃ still remains to be derived. This can be done by using (32) with *P* defined by (23). By means of the expressions of Φ* _{f}* and Γ* _{f}*, the following dt algebraic Riccati equation (DARE) is achieved:

(38)

Then, the minimum of the cost function is given by (35) together with the solution of (38).

**IV. EXAMPLE**

Consider the scalar prediction problem depicted in Fig. 2. The prediction of the continuous time signal *z*(*t*) at time *t* + 1 *sec.* is desired. The signal *z*(*t*) is the output of the ct system *S*_{1}. The measured output *y _{s}*(*t*) is the signal *z*(*t*) corrupted by additive noise *v*(*t*); *v*(*t*) is the output of the ct system *S*_{2}. Both systems are excited by the inputs η_{1}(*t*) and η_{2}(*t*), which are uncorrelated white noise processes with spectrum amplitudes α^{2} and β^{2} respectively. The predictor is a digital ensemble that operates at a sampling rate of *T* = 1 *sec*. In Fig. 2, ẑ(*t* + *T*|*t*) means the prediction of *z*(*t*) at time *t* + 1, taking into account measurements until time *t*. The same consideration is made for the discrete signal ẑ(*k* + 1|*k*). This prediction problem design can be formulated as in Fig. 3, which is similar to the assembly in Fig. 1.

Figure 2: Prediction problem set-up.

Figure 3: Equivalent representation of the prediction problem set-up.

In the example, systems *S*_{1} and *S*_{2} are given by the transfer functions

(39)

The optimal sampled-data predictor *K*, which minimizes the predictor error variance, is obtained by performing the following steps:

1. Obtain the zero-order hold discretization of ct systems *S*_{1} and *S*_{2} at sampling time *T _{s}* = *T*/*N* with *T* = 1 *sec* and an arbitrary value of *N*.
2. Build the state equations of both systems including the delay and express them as in equation (14). Note that the number of states increases from four to five.
3. Lift the dt system and apply the operator Ŝ to obtain the state space representation as in (17).
4. Obtain the value of P̃ using the DARE equation (38).
5. Compute the cost *J _{N}*(*K*) of equation (35).
6. Compare the cost obtained in the former step with that of the previous iteration. If there is no significant difference, the optimal filter approximation has been obtained. Otherwise, increase the value of *N* and return to step 1.

For different values of ω* _{o}* (the natural frequency of the systems), the optimal discrete predictor was designed by following the steps previously described. Optimal dt filters of first order are obtained for every ω* _{o}*, keeping the sampling rate *T* = 1 *sec* constant. The cost *J _{N}*(*K*) tends towards the minimum as *N* increases. In our example, the cost obtained with *N* = 20 constitutes, in fact, an approximation to the optimal solution with an error lower than 1%. In Figs. 4 and 5, the minimum of the cost obtained versus the natural frequency ω* _{o}* is shown.
_{o} In order to asses the performance of the proposed design method, the cost without predictor (*K* = 0), the cost with = 1 -which corresponds to the standard discrete time design-, and the cost obtained by means of the optimal ct design are depicted. The ct filter design was achieved by using a 10th Padé delay approximation. By increasing the order of the Padé approximation, no significant improvements were made. The different costs were obtained in the case where the noise amplitudes are α = 1 and β = 0.01, and they are depicted in Fig. 4. In Fig. 5, the same costs are shown when both α and β are equal to 1.

The example shows that considerable improvements can be made with respect to the classical discrete time design (*N* = 1) by using the proposed sampled-data design method. The differences in cost among the different designs become less significant as the natural frequency decreases. This is expected, since the minimum of the cost obtained by means of the optimal ct filter is equivalent to the discrete one as *T*ω* _{o}* tends to zero. Fewer improvements are obtained in the case of higher noise power (Fig. 5). The filter parameters in the case of Fig. 4 when ω* _{o}* = 0.6 are the following:

(40)

**V. CONCLUSIONS**

A sampled-data minimum variance filter design problem enclosing deconvolution, prediction, and smoothing has been presented. The optimal solution can be found by an approximation method based on lifted systems. The optimal filter is designed by using the solution of the algebraic Riccati equation. Although the original sampled-data system is transformed into an equivalent LTI dt system with infinite-dimensional input-output space, the Riccati equation and the designed filter are finite dimensional.

Our design requires the solution of a dt algebraic Riccati equation instead of the two diophantine equations and the spectral factorization used in the polynomial approach design as in Milocco and De Doná (1996). Standard algorithms exist (Arnold and Laub, 1984) to solve the DARE efficiently.

An example of sampled data prediction has been presented to show the design procedure. In the example, under certain sampling conditions, improvements up to 20% with respect to the classical discrete time solutions were obtained without increasing the filter structure complexity.

Figure 4: Cost versus ω* _{o}* with α = 1 and β = 0.01. (1) Cost without predictor (*K* = 0); (2) cost using the dt solution; (3) cost using the sampled-data design; and (4) cost using the ct design.

Figure 5: Cost versus ω* _{o}* with α = 1 and β = 1. (1) Cost without predictor (*K* = 0); (2) cost using the dt solution; (3) cost using the sampled-data design; and (4) cost using the ct design.

The filter obtained is optimal when the ct systems are linear and time invariant. However, the design can be extended to deal with time-variant systems. In such a case, instead of using the expressions of the linear time-invariant lifted system given in (16), equivalent expressions for linear time-variant lifted systems should be used. The approach can also be extended to deal with uncertain systems as in Milocco and Muravchik (2003).

**Acknowledgments**

This work was supported by the Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET), and Universidad Nacional del Comahue (UNC), both of República Argentina.

**REFERENCES**

1. Ahlén, A. and M. Sternad, "Wiener filter design using polynomial equations", *IEEE Trans. Signal Proc.*, **39**, no. 11, 2387-2399 (1991).

2. Anderson, B. D. O. and J. B. Moore, *Optimal Filtering*, Prentice-Hall, Englewood Cliffs, New Jersey (1975).

3. Arnold, W. F. and A. J. Laub, "Generalized eigenproblem algorithms and software for algebraic Riccati equations", *Proc. IEEE*, **72**, 1746-1754 (1984).

4. Bamieh, B., J. Pearson, B. Francis and A. Tannenbaum, "A lifting technique for linear periodic systems with application to sampled-data control", *Syst. Contr. Lett.*, **17**, 78-88 (1991).

5. Grimble, M. J., "Polynomial systems approach to optimal linear filtering and prediction", *Int. J. Control*, **41**, 1545-1564 (1995).

6. Hara, S., B. D. O. Anderson and H. Fujioka, "Relating H_{2} and H_{∞} norm bounds for sampled-data systems", *IEEE Trans. Automat. Contr.*, **42**, 858-863 (1997).

7. Kabamba, P. T. and S. Hara, "Worst case analysis of sampled-data control systems", *IEEE Trans. Automat. Contr.*, **38**, no. 9, 1337-1357 (1993).

8. Kalman, R. E., "A new approach to linear filtering and prediction problems", *ASME Trans. Journal of Basic Engineering*, **82D**, 35-45 (1960).

9. Keller, J. and B. Anderson, "A new approach to the discretization of continuous-time controllers", *IEEE Trans. Automat. Contr.*, **37**, 214-223 (1992).

10. Milocco, R. and J. De Doná, "Optimal parallel model solutions using the polynomial approach", *Int. J. Control*, **65**, no. 4, 681-697 (1996).

11. Milocco, R. and C. Muravchik, "H_{2} optimal linear robust sampled-data filtering design using polynomial approach", *IEEE Trans. Signal Proc.*, **51**, 1816-1824 (2003).

12. Roberts, A. and M. Newmann, "Polynomial approach to Wiener filtering", *Int. J. Contr.*, **45**, 1953-1959 (1987).

13. Shaked, U., "A generalized transfer function approach to linear stationary filtering and steady-state optimal control problems", *Int. J. Control*, **29**, 741-770 (1976).

14. Sun, W., K. Nagpal and P. Khargonekar, "H_{∞} control and filtering for sampled-data systems", *IEEE Trans. Automat. Contr.*, **38**, no. 8, 1162-1175 (1993).

15. Wang, Z., B. Huang and P. Huo, "Sampled-data filtering with error covariance assignment", *IEEE Trans. Signal Proc.*, **49**, no. 3, 666-670 (2001).

16. Wiener, N., *Extrapolation, Interpolation and Smoothing of Stationary Time Series*, The Technology Press and Wiley, New York (1950).