
## Latin American applied research

*Print version* ISSN 0327-0793

### Lat. Am. appl. res. vol.39 no.3 Bahía Blanca July 2009

**Linearly implicit discrete event methods for stiff ODEs**

**G. Migoni† and E. Kofman†**

† *CIFASIS-CONICET. Laboratorio de Sistemas Dinámicos, FCEIA-UNR. Riobamba 245 bis, (2000) Rosario.*

*Abstract* — This paper introduces two new numerical methods for the integration of stiff ordinary differential equations. Following the idea of quantization based integration, i.e., replacing the time discretization by state quantization, the new methods perform first and second order backward approximations that allow the simulation of stiff systems. It is shown that the new algorithms satisfy the same theoretical properties as previous quantization based integration methods. The translation of the new algorithms into a discrete event (DEVS) specification and their implementation in a DEVS simulation tool are discussed. The efficiency of the methods is illustrated by comparing the simulation of two examples against the classic methods implemented in Matlab/Simulink.

*Keywords* — Stiff System Simulation; Quantization Based Integration; DEVS.

**I. INTRODUCTION**

The integration of stiff systems with traditional methods (Hairer *et al.*, 1993; Hairer and Wanner, 1991; Cellier and Kofman, 2006) based on time discretization requires the use of implicit algorithms, since the step size of explicit methods is limited by their stability region and becomes inadmissibly small (Cellier and Kofman, 2006).

In fact, numerical integration methods whose numerically stable region includes the entire left half (*λ·h*) plane (or at least a large portion of it) are necessary for stiff system integration (Cellier and Kofman, 2006). Only some implicit methods have this type of stability region; no explicit algorithm exhibits this feature.

The problem with implicit methods is that they are computationally expensive, because at each step they need an iterative algorithm (usually Newton iteration) to determine the next value. The problem becomes critical in applications related to real time simulation, where in many cases performing iterations is unacceptable.

An alternative approach to classic time slicing started to develop at the end of the 90's, in which time discretization is replaced by the quantization of the state variables. The resulting simulation models are not discrete time but discrete event systems. The origin of this idea can be found in the definition of Quantized Systems (Zeigler *et al.*, 2000).

This idea was then reformulated with the addition of hysteresis -to avoid the appearance of infinitely fast oscillations- and formalized as the Quantized State Systems (QSS) method for ODE integration in (Kofman and Junco, 2001). This was followed by the definition of the second order QSS2 method (Kofman, 2002) and the third order QSS3 method (Kofman, 2006).

The QSS methods showed some important advantages with respect to classic discrete time methods in the integration of discontinuous ODEs (Kofman, 2004), sparsity exploitation (Kofman, 2002), the property of absolute stability, and the existence of a global error bound (Cellier and Kofman, 2006).

In spite of these properties, QSS, QSS2 and QSS3 fail when applied to stiff systems due to the appearance of fast oscillations. They remain numerically stable at the expense of utilizing excruciatingly small step sizes, as any explicit algorithm is expected to require in this situation.

To solve this problem, a first order backward QSS method (called BQSS, after Backward QSS) was proposed in Migoni *et al.* (2007). The method was able to efficiently integrate many stiff systems. The basic idea of BQSS is to use a future value of the states to obtain the quantized value. Yet, although the algorithm is implicit, it can still be implemented without Newton iterations. The reason is that each state has only two possible future values: if a state variable currently assumes a value *q*_{j}, its next value must be either *q*_{j} + Δ*q*_{j} or *q*_{j} − Δ*q*_{j}. Hence all possible next values can be enumerated without an open-ended search. In other words, BQSS was the first explicit method for stiff ODEs.

The main drawback of BQSS is that it performs only a first order approximation, so accurate results cannot be obtained. Another problem is that BQSS introduces an extra perturbation term that increases the error bound and, in some nonlinear systems, might provoke the appearance of spurious equilibrium points.

This paper presents first a new method that combines the idea of BQSS and linearly implicit integration.

The first order accurate linearly implicit QSS (LIQSS) method follows the idea of BQSS, but it avoids the presence of the mentioned perturbation term and the appearance of spurious equilibrium points by using a linearly implicit idea to find the state values where some derivatives cross zero. Then, a second order accurate LIQSS method (called LIQSS2) is defined by combining the principles of LIQSS and the second order accurate QSS method (QSS2). This new method solves, up to certain limits, the problem of accuracy.

The work is organized as follows: Section II recalls the principles of quantization-based integration and introduces the problems that QSS methods have in dealing with stiff systems. Then, Section III introduces the new methods, namely LIQSS and LIQSS2, and discusses their main implementation issues. Section IV studies the properties of the new methods (legitimacy and error bound). Finally, Section V introduces some simulation examples and Section VI presents conclusions and future work.

**II. QUANTIZATION-BASED INTEGRATION**

This section introduces the principles of QSS methods, shows the problems they have in the simulation of stiff systems, and presents a first approximation to solve these problems (the BQSS method). Finally, a brief introduction to the DEVS formalism is provided and the DEVS implementation of QSS methods is discussed.

**A. QSS Method**

Consider a time invariant ODE in its State Equation System (SES) representation:

ẋ(*t*) = **f**(**x**(*t*), **u**(*t*))    (1)

where **x**(*t*) ∈ ℝ^{n} is the state vector and **u**(*t*) ∈ ℝ^{m} is an input vector, which is a known piecewise constant function.

The QSS method simulates an approximate system, which is called Quantized State System:

ẋ(*t*) = **f**(**q**(*t*), **u**(*t*))    (2)

where **q**(*t*) is a vector of quantized variables, which are quantized versions of the state variables **x**(*t*). Each component of **q**(*t*) follows a piecewise constant trajectory, related to the corresponding component of **x**(*t*) by a hysteretic quantization function, so that:

(3) |

and *q*_{j}(*t*_{0}) = *x*_{j}(*t*_{0}). Thus, *q*_{j}(*t*) only changes when it differs from *x*_{j}(*t*) in ±Δ*Q*_{j}. The magnitude Δ*Q*_{j} is called quantum.
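The hysteretic behavior can be illustrated with a small sketch (our own illustrative Python over a sampled trajectory, not the event-driven DEVS implementation): the quantized value *q* tracks the samples of *x* and is only reset to the current state value when the difference reaches the quantum.

```python
def hysteretic_quantizer(xs, dq):
    """Hysteretic quantization of a sampled trajectory: q starts at x(t0) and
    is reset to the current sample only when |x - q| reaches the quantum dq."""
    q = xs[0]                      # q(t0) = x(t0)
    out = [q]
    for x in xs[1:]:
        if abs(x - q) >= dq:       # hysteresis: small excursions do not move q
            q = x
        out.append(q)
    return out

print(hysteretic_quantizer([0.0, 0.4, 0.9, 1.0, 0.7, 0.2, -0.1], 1.0))
# → [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, -0.1]
```

Without the hysteresis band, a memoryless quantizer could produce infinitely fast oscillations when a state derivative changes sign on a quantization boundary.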

**B. QSS and Stiff Systems**

The system

ẋ_{1}(*t*) = 0.01 *x*_{2}(*t*)
ẋ_{2}(*t*) = −100 *x*_{1}(*t*) − 100 *x*_{2}(*t*) + 2020    (4)

has eigenvalues λ_{1} ≈ −0.01 and λ_{2} ≈ −99.99 which means that the system is stiff.

The QSS method approximates this system as

ẋ_{1}(*t*) = 0.01 *q*_{2}(*t*)
ẋ_{2}(*t*) = −100 *q*_{1}(*t*) − 100 *q*_{2}(*t*) + 2020    (5)

Considering initial conditions *x*_{1}(0) = 0, *x*_{2}(0) = 20, and quanta Δ*Q*_{1} = Δ*Q*_{2} = 1, the QSS integration performs the following steps:

At *t* = 0 we set *q*_{1}(0) = 0 and *q*_{2}(0) = 20. Then, ẋ_{1} = 0.2 and ẋ_{2} = 20. This situation remains until |*q*_{i} − *x*_{i}| = Δ*Q*_{i} = 1.

The next change in *q*_{1} is then scheduled at *t* = 1/0.2 = 5, while the change in *q*_{2} is scheduled at *t* = 1/20 = 0.05.

Thus, a new step is performed at *t* = 0.05. After this step it results *q*_{1}(0.05) = 0, *q*_{2}(0.05) = 21, *x*_{1}(0.05) = 0.01, *x*_{2}(0.05) = 21. The derivatives are ẋ_{1}(0.05) = 0.21 and ẋ_{2}(0.05) = −80.

The next change in *q*_{1} is rescheduled at 0.05 + (1 − 0.01)/0.21 = 4.764 while the next change in *q*_{2} is scheduled at 0.05 + 1/80 = 0.0625. Hence, the next step is performed at *t* = 0.0625.

At *t* = 0.0625 it results *q*_{1}(0.0625) = 0, *q*_{2}(0.0625) = *x*_{2}(0.0625) = 20, *x*_{1}(0.0625) ≈ 0.0126, and the derivatives coincide with those of *t* = 0.

This is cyclically repeated until a change in *q*_{1} occurs. That change occurs at *t* ≈ 4.95, after 158 changes in *q*_{2} (which oscillates between 20 and 21).

The simulation continues in the same way. Fig.1 shows the evolution of *q*_{1}(*t*) and *q*_{2}(*t*) through 500 units of simulated time.

**Figure 1**: QSS Simulation

The fast oscillations of *q*_{2} provoke a total of 15995 transitions in that variable, while *q*_{1} only changes 21 times. Consequently, the total number of steps to complete the simulation is greater than 16000 (of the order of the 25000 steps needed by the Forward Euler method to obtain a stable result).

Evidently, the QSS method is unable to efficiently integrate System (4).
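The step explosion can be reproduced with a compact reimplementation of first order QSS (an illustrative, non-DEVS Python sketch of the algorithm described above; the function `qss1` and its event loop are our own construction, not the PowerDEVS code):

```python
import math

def f(q):
    # System (4): x1' = 0.01*x2 ; x2' = -100*x1 - 100*x2 + 2020
    return [0.01 * q[1], -100.0 * q[0] - 100.0 * q[1] + 2020.0]

def qss1(x0, dq, t_end):
    """First order QSS: states advance linearly between events; the quantized
    state q_j is reset to x_j whenever |x_j - q_j| reaches the quantum dq_j."""
    n = len(x0)
    t, x, q = 0.0, list(x0), list(x0)
    changes = [0] * n
    while True:
        d = f(q)
        # time for each x_j (slope d_j) to leave the band [q_j - dq_j, q_j + dq_j]
        dts = [(q[j] + (dq[j] if d[j] > 0 else -dq[j]) - x[j]) / d[j]
               if d[j] != 0 else math.inf for j in range(n)]
        j = min(range(n), key=lambda k: dts[k])
        dt = max(dts[j], 0.0)          # guard against float overshoot
        if t + dt > t_end:
            break
        for k in range(n):             # all states advance with constant slopes
            x[k] += d[k] * dt
        t += dt
        q[j] = x[j]                    # event: the quantized state follows x_j
        changes[j] += 1
    return changes
```

With Δ*Q* = 1 and 500 units of simulated time, this sketch reproduces the behavior described above: on the order of twenty changes in *q*_{1} against many thousands in *q*_{2}.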

**C. QSS2 Method**

The second order QBI method, QSS2, uses first order quantization. As shown in Fig. 2, a first order quantizer produces a piecewise linear output trajectory. Each section of that trajectory starts with the value and slope of the input and finishes when it differs from the input in Δ*Q*_{i}. A formal definition of a first order quantization function can be found in (Kofman, 2002).


**Figure 2**: Trajectories of a first-order quantizer.

The QSS2 method then approximates a system like (1) by (2), but now the quantized variables *q*_{i}(*t*) follow piecewise linear trajectories and the state variables *x*_{i}(*t*) are piecewise parabolic functions of time.

Like the first order QSS, QSS2 can be represented by a DEVS model. The advantage of QSS2 is that it permits using a small quantum -i.e., setting a small error tolerance- without considerably increasing the number of calculations. In QSS, the number of steps is inversely proportional to the quantum, while in QSS2 it is only inversely proportional to its square root.

QSS2 also exhibits the problem of fast oscillations in the simulation of stiff systems. For instance, if we use the QSS2 method to simulate System (4), it performs 65467 steps (19 changes in q1 and 65448 changes in *q*_{2}).

**D. BQSS Method**

The BQSS method is similar to QSS, but *q*_{i} is always chosen so that *x*_{i}(*t*) goes towards *q*_{i}(*t*).

Basically, given a state variable *x*_{j}(*t*), BQSS uses two hysteretic quantization functions: one from below (q̲_{j}(*t*) ≤ *x*_{j}(*t*)) and one from above (q̄_{j}(*t*) ≥ *x*_{j}(*t*)). Both quantization functions are defined so that they never differ from *x*_{j} in more than 2Δ*Q*_{j}.

The quantized variable *q*_{j} is chosen equal to either q̲_{j} or q̄_{j}, according to the direction of ẋ_{j}(*t*): when ẋ_{j} > 0 we use *q*_{j} = q̄_{j}, and vice versa.

When ẋ_{j} = *f*_{j}(**q**, **u**) depends on *q*_{j}, it can happen that the sign of the derivative changes depending on which of the two candidate values is used to evaluate *f*_{j}. In that case, we cannot find a correct value for *q*_{j}.

However, in that situation, the continuity of *f*_{j} ensures that a value q̃_{j} exists, with q̲_{j} < q̃_{j} < q̄_{j}, such that *f*_{j} = 0.

Thus, the BQSS method sets either *q*_{j} = q̄_{j} or *q*_{j} = q̲_{j} and enforces the derivative ẋ_{j} = 0 by adding an extra perturbation term Δ*f*_{j}.

Then, given the system Eq.(1), instead of simulating a system like Eq.(2), BQSS simulates a system of the form:

ẋ(*t*) = **f**(**q**(*t*), **u**(*t*)) + Δ**f**(*t*)    (6)

where Δ**f**(*t*) is normally zero, except when the situation described above is found (i.e., when we cannot find a correct value for *q _{j}*).

BQSS works fine with most stiff systems. However, the presence of the perturbation term Δ**f**(*t*) increases the error and can cause the appearance of spurious equilibrium points in some nonlinear systems.

Another limitation of BQSS is that it is only first order accurate. We could not find, based on that idea, a second order accurate method.

**E. DEVS Formalism**

A DEVS model (Zeigler *et al.*, 2000) processes an input event trajectory, and, based on that trajectory and the initial state, produces an output event trajectory.

The behavior of an atomic DEVS model is formally defined by the structure:

*M* = (*X*, *Y*, *S*, δ_{int}, δ_{ext}, λ, *ta*)    (7)

where:

- *X* is the input event set, i.e., the set of all possible input event values.
- *Y* is the output event set.
- *S* is the state value set.
- δ_{int}, δ_{ext}, λ and *ta* are the functions that define the model dynamics.

Each possible state *s* (*s* ∈ *S*) has an associated time advance given by the *time advance function* *ta* : *S* → ℝ_{0}^{+}. *ta*(*s*) is a non-negative number indicating how long the system remains in a given state in the absence of input events.

Then, if at time *t*_{1} the state takes the value *s*_{1}, after *ta*(*s*_{1}) time units (i.e., at time *t*_{1} + *ta*(*s*_{1})) the system performs an internal transition going to a new state *s*_{2} = δ_{int}(*s*_{1}). Function δ_{int} (δ_{int} : *S* → *S*) is called *internal transition function*.

When the state goes from *s*_{1} to *s*_{2}, an output event is produced with value *y*_{1} = λ(*s*_{1}). Function λ (λ : *S* → *Y*) is called the *output function*. Functions *ta*, δ_{int} and λ define the autonomous behavior of a DEVS model.

When an input event arrives, the state changes instantaneously. The new state depends not only on the input event value but also on the previous state and the elapsed time since the last transition. If the model arrives at state *s*_{3} at *t*_{3} and an input event with value *x*_{1} arrives at time *t*_{3} + *e*, the new state is calculated as *s*_{4} = δ_{ext}(*s*_{3}, *e*, *x*_{1}) (notice that *e* < *ta*(*s*_{3})). In this case, we say that the system performs an external transition. Function δ_{ext} (δ_{ext} : *S* × ℝ_{0}^{+} × *X* → *S*) is called the *external transition function*. During an external transition no output event is produced.

DEVS models can be coupled and the result of the coupling defines an equivalent atomic DEVS model.
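As a concrete illustration, the structure (7) maps naturally onto a class with the four dynamic functions. The following minimal Python sketch (the `Counter` model and the autonomous `simulate` loop are hypothetical toy constructions for exposition, not part of any DEVS tool) shows the autonomous behavior: output, then internal transition, every *ta*(*s*) time units.

```python
import math

class AtomicDEVS:
    """Skeleton of an atomic DEVS model (X, Y, S, delta_int, delta_ext, lambda, ta)."""
    def ta(self, s):                # time advance: time to stay in s without inputs
        return math.inf
    def delta_int(self, s):         # internal transition function
        return s
    def delta_ext(self, s, e, x):   # external transition (elapsed time e, input x)
        return s
    def output(self, s):            # output function (lambda), evaluated before delta_int
        return None

class Counter(AtomicDEVS):
    """Toy model: emits 0, 1, 2, ... once per time unit."""
    def ta(self, s):        return 1.0
    def delta_int(self, s): return s + 1
    def output(self, s):    return s

def simulate(model, s0, t_end):
    """Autonomous simulation (no input events): alternate lambda and delta_int."""
    t, s, events = 0.0, s0, []
    while t + model.ta(s) <= t_end:
        t += model.ta(s)
        events.append((t, model.output(s)))
        s = model.delta_int(s)
    return events

print(simulate(Counter(), 0, 3.5))  # → [(1.0, 0), (2.0, 1), (3.0, 2)]
```

A full DEVS simulator would additionally route output events of one model to `delta_ext` of the coupled models; the loop above only exercises the autonomous part defined by *ta*, δ_{int} and λ.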

**F. DEVS and QSS**

Each component of Eq.(2) can be thought of as the coupling of two elementary subsystems. A static one,

*d*_{j}(*t*) = *f*_{j}(**q**(*t*), **u**(*t*))    (8)

and a dynamical one

*x*_{j}(*t*) = *x*_{j}(*t*_{0}) + ∫ *d*_{j}(τ) dτ,   *q*_{j}(*t*) = *Q*_{j}(*x*_{j}(·))    (9)

where *Q*_{j} is the hysteretic quantization function (it is not a function of the instantaneous value *x*_{j}(*t*), but a functional of the trajectory *x*_{j}(·)).

_{j}Since the components *u _{j}*(

*t*) and

*q*(

_{j}*t*) are piecewise constant, the output of Subsystem (8), i.e.,

*d*(

_{j}*t*), will be piecewise constant. In this way, both subsystems have input and output piecewise constant trajectories that can be represented by sequences of events.

Then, Subsystems (8) and (9) define a relation between their input and output sequences of events. Consequently, equivalent DEVS models can be found for these systems, called *static functions* and *quantized integrators* respectively (Kofman and Junco, 2001).

The second order accurate QSS2 method can be implemented in the same way as QSS. However, the trajectories are now piecewise linear instead of piecewise constant. Thus, the events carry two numbers that indicate the initial value and the slope of each segment. Also, the static functions and quantized integrators are modified with respect to those of QSS so they can take the slopes into account.

**III. LINEARLY IMPLICIT QSS METHODS**

In this section we introduce the new first and second order Linearly Implicit QSS (LIQSS) methods, and we discuss their implementation as DEVS models.

**A. First Order LIQSS Method**

The idea of LIQSS is very similar to that of BQSS. However, when a value *q*_{j} such that *x*_{j} goes towards it cannot be found, LIQSS tries to find the value q̃_{j} for which ẋ_{j} = 0, instead of adding a perturbation term Δ*f*_{j} to enforce that situation.

In order to illustrate this idea, we shall simulate System (4) from the same initial conditions and quantum as before.

At *t* = 0, we can choose *q*_{2} = 19 or *q*_{2} = 21 according to the sign of ẋ_{2}(*t*). In both cases ẋ_{1}(0) > 0, so the quantized future value of *x*_{1} will be *q*_{1}(0) = 1. On the other hand, if we choose *q*_{2}(0) = 21 then ẋ_{2}(0) = −180 < 0, and if we choose *q*_{2}(0) = 19 it results ẋ_{2}(0) = 20 > 0, so there exists a point 19 < q̃_{2}(0) < 21 in which ẋ_{2}(0) = 0. The value q̃_{2}(0) = 19.2 can be calculated exploiting the linear dependence of ẋ_{2} on *q*_{2}.

Then, the state derivatives result ẋ_{1}(0) = 0.192 and ẋ_{2}(0) = 0.

The next change in *q*_{1} is scheduled for *t* = 1/0.192 ≈ 5.2083, while the corresponding change in *q*_{2} is scheduled for *t* = ∞.

Then, the next step takes place at *t* = 1/0.192 ≈ 5.2083. At this time, *x*_{1} = 1 and *x*_{2} = 20. After that, *q*_{1}(5.2083) = 2 (because ẋ_{1}(5.2083) > 0). If we reevaluate ẋ_{2} for *q*_{2} = 19 and *q*_{2} = 21, it results lower than zero in both cases, so the correct value is *q*_{2}(5.2083) = 19, because in this way *x*_{2} goes towards *q*_{2}.

With these values of *q*_{1} and *q*_{2}, the following state derivatives are obtained: ẋ_{1}(5.2083) = 0.19 and ẋ_{2}(5.2083) = −80. The next change in *q*_{1} is scheduled for *t* = 1/0.192 + 1/0.19 ≈ 10.47149, while the one in *q*_{2} is scheduled for *t* = 1/0.192 + 1/80 ≈ 5.22083. So, the next step is performed at *t* = 5.22083, when *x*_{2} reaches *q*_{2}.

Calculations follow in this way. Figure 3 shows the evolution of *q*_{1}(*t*) and *q*_{2}(*t*) through 500 units of simulated time. As can be seen, the fast oscillations of *q*_{2} are not present with this method. Here, *q*_{1} changes 21 times and *q*_{2} changes 25 times, totaling 46 steps (a rather decent result for a stiff system).

**Figure 3**: LIQSS Simulation
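The walkthrough above can be condensed into a simplified LIQSS sketch (our own illustrative Python, not the PowerDEVS implementation; for simplicity it re-selects every quantized state at every event, whereas the DEVS implementation only updates the variables involved):

```python
import math

def f(q):
    # System (4): x1' = 0.01*x2 ; x2' = -100*x1 - 100*x2 + 2020
    return [0.01 * q[1], -100.0 * q[0] - 100.0 * q[1] + 2020.0]

def liqss1(x0, dq, t_end):
    """Simplified first order LIQSS: each quantized state is set to x_j + dq_j or
    x_j - dq_j according to the derivative sign; on a sign flip, the linearly
    implicit zero-derivative value is used instead (no Newton iteration)."""
    n = len(x0)
    x, q = list(x0), list(x0)

    def choose(j):
        up, dn = x[j] + dq[j], x[j] - dq[j]
        qu = q[:]; qu[j] = up
        qd = q[:]; qd[j] = dn
        d_up, d_dn = f(qu)[j], f(qd)[j]
        if d_up > 0:
            q[j] = up                  # x_j rises towards q_j
        elif d_dn < 0:
            q[j] = dn                  # x_j falls towards q_j
        else:
            # sign flip: secant estimate of A_jj, then solve for zero derivative
            A = (d_up - d_dn) / (2.0 * dq[j])
            q[j] = dn - d_dn / A if A != 0.0 else x[j]

    t, steps = 0.0, 0
    for j in range(n):
        choose(j)
    while steps < 100000:              # safety cap for this sketch
        d = f(q)
        dts = [(q[j] - x[j]) / d[j] if (q[j] - x[j]) * d[j] > 0 else math.inf
               for j in range(n)]
        j = min(range(n), key=lambda k: dts[k])
        if t + dts[j] > t_end:
            break
        for k in range(n):
            x[k] += d[k] * dts[j]
        t += dts[j]
        steps += 1
        for k in range(n):
            choose(k)
    return steps, x
```

On System (4) with Δ*Q* = 1 and 500 units of simulated time, this sketch settles near the equilibrium after a few dozen events, against more than 16000 for QSS.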

**B. LIQSS Definition**

Given System (1), the LIQSS method approximates it by Eq.(2), where each *q _{j}* is defined by the following function

(10) |

with

(11) | |

(12) | |

(13) |

where **q̃**^{j} is equal to **q**(*t*^{−}) except for the *j*-th component, where it is equal to q̃_{j}, and *A*_{j,j} is the (*j*, *j*) component of the Jacobian matrix evaluated at **q̃**^{j}, i.e.,

(14) |

As we shall see now, when *A*_{j,j} ≠ 0, setting *q*_{j} = q̃_{j} provokes (in the linear case) the situation ẋ_{j} = 0.

_{i}**C. Calculation of **

We take **q**^{j}(*t*) equal to **q**(*t*), except that the *j*-th component is q̄_{j}. Notice that we use q̃_{j} when *f*_{j} changes sign between q̲_{j} and q̄_{j}; thus, an intermediate point exists where *f*_{j} = 0.

We take **q̲**^{j}(*t*) equal to **q**^{j}(*t*), except that the *j*-th component is q̲_{j}.

Defining

and the residual

calling **q̃**^{j} the point where *f*_{j} = 0, we can write

Taking into account that **q**^{j} and **q̲**^{j} only differ in the *j*-th component, and considering that *f*_{j}(**q̃**^{j}, **u**) = 0, we can solve the previous equations for q̃_{j}, obtaining

(15) |

In order to estimate *A _{j,j}* we can use the expression

(16) |

If *f*_{j} depends linearly on *q*_{j}, Eq.(16) gives the exact value of the Jacobian entry. Moreover, in that case the last term in Eq.(15) results zero, i.e., the value q̃_{j} provokes ẋ_{j} = 0.

In a nonlinear case we will only have ẋ_{j} ≈ 0. Although the oscillations will not disappear in this case, they will have a low frequency.

This is analogous to what linearly implicit discrete time methods do by solving the implicit equation only for the linear part of the problem, and it is the reason for calling this method LIQSS.
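The linearly implicit step can be sketched in a few lines of Python (an illustrative construction of ours mirroring Eqs.(15)-(16), not library code): the secant through the two candidate quantized values gives *A*_{j,j}, and the zero of the secant gives q̃_{j}, exact whenever *f*_{j} is linear in *q*_{j}.

```python
def qtilde(f_j, q_lo, q_hi):
    """Secant estimate of A_jj (Eq. 16) and the resulting zero-derivative
    point (Eq. 15): exact if f_j depends linearly on q_j."""
    d_lo, d_hi = f_j(q_lo), f_j(q_hi)
    A = (d_hi - d_lo) / (q_hi - q_lo)   # estimated Jacobian entry A_jj
    return q_lo - d_lo / A              # q where the linearized f_j vanishes

# For f_2(q2) = -100*1 - 100*q2 + 2020 (System (4) with q1 = 1),
# the candidates 19 and 21 give derivatives 20 and -180:
print(qtilde(lambda q2: -100.0 - 100.0 * q2 + 2020.0, 19.0, 21.0))  # → 19.2
```

This reproduces the value q̃_{2}(0) = 19.2 found in the walkthrough of Section III.A, without any Newton iteration.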

**D. DEVS Implementation of LIQSS**

The main difference between LIQSS and QSS is the way in which **q** is obtained from **x**. Eq.(10) says that *q*_{j} depends not only on *x*_{j} but also on **q**.

In QSS, the changes in *q*_{j} are produced when *x*_{j} differs from it in Δ*Q*_{j}. Similarly, in LIQSS *q*_{j} changes when *x*_{j} reaches *q*_{j}. However, because of Eq.(10), changes in *q*_{i} can trigger changes in other quantized variables. In the same way, changes in some component *u*_{i} of the input can also change some quantized variables.

_{i}Following the idea of the DEVS implementation of QSS, i.e., by the coupling of *quantized integrators* and *static functions*, but taking into account the mentioned differences, we can obtain a DEVS model for the LIQSS algorithm.

The structure of this implementation is shown in Fig. 4.

**Figure 4**: Block Diagram of LIQSS

Since the trajectories of *u*_{j}(*t*) and *q*_{j}(*t*) are piecewise constant, the static functions are the same as those of QSS.

Then, we only need to define the LIQSS integrators so that they calculate *q _{j}* according to the definition of LIQSS.

In order to build the DEVS model of the LIQSS quantized integrator, we shall first analyze its behavior.

Let us suppose that at time *t* the state *x*_{j} reaches the value of *q*_{j} with a positive slope (ẋ_{j}(*t*^{−}) > 0). Then, the upper and lower quantization functions q̄_{j} and q̲_{j} must be updated (increasing them by Δ*Q*_{j}). In this situation we first try the output value *q*_{j}(*t*) = q̄_{j} = *q*_{j}(*t*^{−}) + Δ*Q*_{j}.

_{j}If, due to the feedback, we receive an input event with *d _{j}* = (

*t*) < 0, then we are in the situation where we need to calculate the value that provokes = 0, i.e., we estimate using Eq.(13), that in this case takes the form

and we set *q _{j}*(

*t*) = (supposing that

*A*= 0, otherwise we just use

_{j,j}*q*= ).

_{j}The value of *A _{j,j}* can be easily estimated using the values of (

*t*

^{−}), (

*t*),

*q*(

_{j}*t*

^{−}) and (

*t*).

It could happen that the integrator then receives (also at time *t*) another event with a nonzero value (because of the error in the calculation of q̃_{j}). In this case we shall not calculate any further value for *q*_{j} at time *t*.

In the opposite case (when *x*_{j} reaches *q*_{j} from above) we proceed in an analogous way.

_{j}The other case in which *q _{j}* changes and the quantized integrator must provoke output events is when a change in the sign of the derivative is received. Suppose that

*x*was going up towards

_{j}*q*= and at time

_{j}*t*(due to the change in some quantized variable or input), an event with

*d*= (

_{j}*t*) < 0 is received.

Then, the integrator must send the new output value *q*_{j}(*t*) = q̲_{j} so that *x*_{j} goes towards *q*_{j}. In this case it can also happen that, due to the feedback, a new event that inverts the sign of ẋ_{j} again is received at time *t*, and we are once more in the situation where we need to calculate q̃_{j}. We then proceed exactly as before, calculating q̃_{j} and ignoring any further change of sign at time *t*.

As before, the case where *x*_{j} is initially going down towards *q*_{j} is completely analogous to the one described above.

In any situation, after calculating *q*_{j}, it is easy to schedule the time of the next output event:

The behavior described for the quantized integrator can be easily translated into a DEVS model.

**E. Second Order LIQSS: Basic Idea**

The second order linearly implicit method, called LIQSS2, was developed combining the ideas of QSS2 and LIQSS.

The quantized variables of this new method are piecewise linear instead of piecewise constant, and they are chosen so that *x*_{j} goes towards *q*_{j}.

_{j}As an example, in Fig. 5 a general trajectory is shown following this idea.

**Figure 5**: LIQSS2 Trajectories

Analogously to the case of LIQSS and BQSS, it can happen that the sign of ẍ_{j} changes when we start a new segment of *q*_{j}(*t*). Then, an intermediate slope *m*_{j} exists that makes ẍ_{j} = 0. In this case, we can also select the initial value *q*_{j} of the new segment so that ẋ_{j} = *m*_{j}, i.e., we can make the state trajectory run parallel to the quantized trajectory so that no events are generated. Both values, *q*_{j} and *m*_{j}, can be easily obtained when ẋ_{j} depends linearly on *q*_{j}.

_{j}If we simulate System (4) from the same initial conditions but with quanta Δ*q*_{1} = 0.1 and Δ*q*_{2} = 0.1 (10 times smaller than before), the LIQSS2 method only performs 59 steps (20 changes in *q*_{1} and 39 in *q*_{2}). The simulation results can be seen in Fig. 6

**Figure 6**: LIQSS2 Simulation

**F. LIQSS2 Definition**

Given the system (1), the LIQSS2 method approximates it by (2), where each component *q _{j}* is defined as:

(17) |

with

(18) | |

(19) | |

(20) |

and

(21) |

Note that in Eq.(21), the condition ẍ_{j}(*t*) · ẍ_{j}(*t*^{−}) < 0 means that an intermediate slope *m*_{j} exists so that ẍ_{j} = 0. In a linear system this value can be calculated analogously to LIQSS with the expression

*m*_{j}(*t*) = *m*_{j}(*t*^{−}) − ẍ_{j}(*t*^{−})/*A*_{j,j}. In a nonlinear case, we shall again obtain only an approximate value.

**G. DEVS Implementation of LIQSS2**

The simulation scheme for LIQSS2 is the same as before (Fig. 4), but now the trajectories are piecewise linear and parabolic.

Since the quantized variable trajectories of LIQSS2 are, as in QSS2, piecewise linear, the static functions are the same as those of QSS2.

The main difference between QSS2 and LIQSS2 is the way in which the quantized state variable trajectories are calculated in the integrator.

Each segment of the quantized variable trajectory can be characterized by an initial value *q*_{j} and a slope *mq*_{j}. In the same way, the state derivative can be characterized by the pair (*d*_{j}, *md*_{j}). Thus, each input and output event of the quantized integrator carries two numbers.

_{j}Let us then analyze the behavior of the resulting DEVS model. Suppose that at time *t* the state *x _{j}* reaches either or

*with (*

__q___{j}*t*

^{−}) > 0. Using the fact that

*x*is piecewise parabolic, we know both (

_{j}*t*

^{−}) and (

*t*

^{−}). Thus, we set the new segments of and

*with slope*

__q___{j}*m*= (

_{q}*t*

^{−}) and initial values

*x*+ Δ

_{j}*Q*and

_{j}*x*− Δ

_{j}*Q*respectively. Then, as (

_{j}*t*

^{−}) > 0 we select

*q*(

_{j}*t*) = (

*t*).

It could happen that, due to the feedback, at time *t* we then receive an event with slope *md*_{j}(*t*) = ẍ_{j}(*t*) < 0. Thus, an intermediate output slope between the old and the new one exists that provokes the situation *md*_{j} = ẍ_{j} = 0. If *A*_{j,j} ≠ 0, this slope can be estimated as

_{j,j}(22) |

We also calculate the corresponding value q̃_{j}(*t*):

Thus, we update the slopes of q̲_{j} and q̄_{j} to the new value and we output an event carrying q̃_{j} and the slope estimated in Eq.(22).

As in the case of LIQSS, if we receive another event at time *t* we do not produce any further output event.

Another situation in which *q*_{j} changes is when an event carrying a change in the sign of the second derivative is received. Suppose that *x*_{j} was moving towards *q*_{j} with ẍ_{j} > 0, and at time *t* an event is received with *md*_{j} = ẍ_{j} < 0 (due to the change in some other quantized or input variable).

_{j}Thus, we first try with *q _{j}* =

*(*

__q___{j}*t*) and we update the slope

*mq*= (

_{j}*t*

^{−}). If, due to feedback, we receive another event at time

*t*with

*md*> 0, we must calculate the value that makes

_{j}*md*= 0, and we repeat what we did from Eq.(22).

_{j}Once *q _{j}* is calculated, we must schedule the next event time. The time to the next event is given by the first crossing of

*x*with either or

_{j}*.This can be calculated as the minimum positive solution σ*

__q___{j}_{j}of the following equations

Similarly to the case of LIQSS, *A _{j,j}* can be estimated as:

All these ideas can be easily translated into the DEVS model of the LIQSS2 quantized integrator.

**H. PowerDEVS Implementation**

PowerDEVS (Pagliero *et al.*, 2003) is a software tool for DEVS simulation. It has a graphical editor that permits building block diagrams of DEVS models. PowerDEVS libraries contain all the blocks needed to implement the QSS methods, including quantized integrators, static functions, source terms and blocks for discontinuity handling.

The DEVS models corresponding to both LIQSS methods were added to the generic quantized integrator that implements the QSS methods. Now, this block permits selecting among the following methods: QSS, QSS2, QSS3, BQSS, CQSS, LIQSS and LIQSS2.

This block also permits selecting the quantum and the initial state value.

Section V illustrates the use of these new methods in PowerDEVS.

**IV. THEORETICAL PROPERTIES**

We shall treat here the most important properties of the LIQSS methods. We shall show first that the methods perform a finite number of steps in a finite interval of time (this guarantees that the simulation time will always advance). Then we shall analyze the stability and accuracy properties.

**A. Trajectories and Legitimacy of LIQSS **

A crucial requirement of QSS methods is the legitimacy condition, which ensures that a finite number of events occurs in any finite interval of time. The following theorem proves this property for the first order LIQSS method.

**Theorem 1.** *Suppose that the function* **f** *in* (1) *is bounded in a domain D × D*_{u}*, where D* ⊂ ℝ^{n} *and D*_{u} ⊂ ℝ^{m}*, and assume that the trajectory* **u**(*t*) ∈ *D*_{u} *is piecewise constant. Then:*

1. *Any solution* **x**(*t*) *of* (2) *is continuous while* **q**(*t*) *remains in D.*
2. *The trajectory* **q**(*t*) *is piecewise constant while it remains in D.*

*Proof*. The proof of (1) is straightforward since, according to (2), the derivative of **x** is bounded.

For item (2), in order to prove that **q** is piecewise constant, it is necessary to ensure that it only experiences a finite number of changes in any finite interval of time.

Let (*t*_{1},*t*_{2}) be an arbitrary interval of time in which **q**(*t*) remains in *D*. We shall prove that, within this interval, **q**(*t*) has a finite number of changes.

The assumptions of the theorem ensure that **f**(**q**, **u**) is bounded and, taking into account the relation between *x*_{j} and *q*_{j}, positive constants exist so that, for *t* ∈ (*t*_{1}, *t*_{2})

Let *t*_{c} ∈ (*t*_{1}, *t*_{2}) and suppose that *q*_{j} changes at *t*_{c}. According to (12), this situation cannot be repeated until |*x*_{j}(*t*) − *x*_{j}(*t*_{c})| ≥ Δ*Q*_{j}. Thus, the minimum time interval between two discontinuities in *q*_{j}(*t*) is

Then, calling *n*_{j} the number of changes of *q*_{j}(*t*) in the interval (*t*_{1}, *t*_{2}), it results that

Since **u**(*t*) is piecewise constant, it will perform a finite number of changes *n*_{u} in the interval (*t*_{1}, *t*_{2}).

The definition of *q*_{j} ensures that it can only change when ẋ_{j}(*t*) changes, or when there is a change in some other quantized or input variable (*q*_{i}(*t*) or *u*_{i}(*t*)) that inverts the sign of ẋ_{j}.

In conclusion, changes in *q*_{j}(*t*) are linked to changes in some ẋ_{i}(*t*) or *u*_{i}(*t*). Thus, the total number of changes will be equal to or less than the sum of all the changes in those variables, i.e.,

which is a finite number.

Although this theorem is only valid for the first order LIQSS method, an analogous result for LIQSS2 can be obtained by combining this proof with that of QSS2 (Kofman, 2002).

**B. Perturbed representation**

The theoretical properties of the QSS methods are based on a perturbed representation of the original system (1) that is equivalent to the approximation Eq.(2). Defining Δ**x**(*t*) = **q**(*t*) − **x**(*t*), each row of System (2) can be rewritten as:

(23) |

From (10), (11), (12) and (13) in the definition, it can be ensured that each component Δ*x*_{i}(*t*) of Δ**x**(*t*) is bounded by

|Δ*x*_{i}(*t*)| ≤ 2Δ*Q*_{i}    (24)

where Δ*Q*_{i} is the quantum adopted for *x*_{i}(*t*). Thus, the LIQSS methods simulate an approximate system which only differs from the original SES (1) due to the presence of the bounded state perturbation.

**C. Global Error Bound and Stability**

Given the LTI system

ẋ(*t*) = *A***x**(*t*) + *B***u**(*t*)    (25)

where *A* is a Hurwitz matrix with Jordan canonical form Λ = *V*^{−1}*AV*, the LIQSS(2) approximation simulates the system

ẋ_{a}(*t*) = *A*(**x**_{a}(*t*) + Δ**x**(*t*)) + *B***u**(*t*)    (26)

Defining the error as **e**(*t*) = **x**(*t*) − **x**_{a}(*t*), and following the procedure of Kofman (2002) and Cellier and Kofman (2006), it results that

|**e**(*t*)| ≤ |*V*| |Re(Λ)^{−1}Λ| |*V*^{−1}| 2Δ*Q*    (27)

where Δ*Q* is the vector of adopted quanta.

The error bound is twice the error bound of QSS, QSS2 and QSS3.

**D. Equilibrium Points**

One of the drawbacks of BQSS was the appearance of spurious equilibrium points. Due to the term Δ**f**, Eq.(6) admits equilibrium points even when **f**(**q**, **u**) ≠ 0.

However, LIQSS and LIQSS2 can only reach an equilibrium point when **f**(**q**,**u**) = 0, i.e., when the quantized variables reach an equilibrium point of the original system.

**V. EXAMPLES**

This section introduces some examples that show the advantages of the LIQSS methods.

**A. Second Order Linear System**

In Section II the following system was presented:

(28) |

with initial conditions *x*_{1}(0) = 0 and *x*_{2}(0) = 20.

The system was first simulated using the LIQSS and LIQSS2 methods with quantum Δ*Q _{i}* = 1. Then, the simulations were repeated reducing the quantum by factors of 10, 100 and 1000. The following table shows the number of steps performed by each method with each quantization:

The table shows that the number of steps performed by LIQSS grows linearly with the quantum reduction, while the number of steps performed by LIQSS2 grows approximately with the square root of the quantum reduction.
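The inverse-linear relation between quantum and number of steps for the first order method can be reproduced on a scalar example. The following is a minimal sketch of a LIQSS1-style step loop for *ẋ* = *ax* + *b*; it is our own simplified illustration of the linearly implicit quantization idea, not the paper's implementation, and all names and the scheduling scheme are assumptions:

```python
# A minimal LIQSS1-style loop for the scalar linear ODE x' = a*x + b (a < 0).
# Illustration only (not the paper's code): q is placed one quantum ahead of x
# in the direction of motion, and when either choice would invert the sign of
# x', q is set to the point where x' = 0.
def liqss1_scalar(a, b, x0, dq, t_end):
    t, x, steps = 0.0, x0, 0
    while t < t_end:
        if a * (x + dq) + b > 0:       # derivative positive above x: quantize up
            q = x + dq
        elif a * (x - dq) + b < 0:     # derivative negative below x: quantize down
            q = x - dq
        else:                          # both choices invert x': pick the x' = 0 point
            q = -b / a
        dx = a * q + b                 # derivative held constant until x reaches q
        if dx == 0.0:                  # equilibrium: f(q) = 0, no spurious rest point
            break
        t += (q - x) / dx              # advance to the instant when x reaches q
        x = q
        steps += 1
    return x, steps
```

With *a* = −1, *b* = 0, *x*_{0} = 2 the state decays toward 0: a quantum of 0.25 takes 7 steps and a quantum of 0.125 takes 15, i.e., halving the quantum roughly doubles the step count, consistent with the linear dependence observed for LIQSS. Note also that the loop terminates only when the derivative evaluated at the quantized state is exactly zero, mirroring the equilibrium property discussed above.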

Figure 7 shows the simulation results using Simulink ODE15s with tolerance 10^{−3} and PowerDEVS with Δ*Q _{i}* = 0.001 (the difference between the two results cannot be appreciated with the naked eye).

**Figure 7**: Linear Stiff System Simulation

The simulation time could not be evaluated in PowerDEVS using Δ*Q _{i}* = 0.001 (it was too small to be measured accurately). Thus, we compared the simulation times using quantum Δ*Q _{i}* = 0.0001. The simulation in Simulink takes 0.078 seconds (ODE15s, in accelerated mode, with tolerance 10^{−3}), while in PowerDEVS it takes 0.015 seconds.

The global error bound -Eq.(27)- ensures that the error in the LIQSS2 simulation is always less than 2 · 10^{−4} in *x*_{1} and 6 · 10^{−4} in *x*_{2}.

Figure 8 shows the simulation error in PowerDEVS with quanta Δ*Q _{i}* = 0.0001. Comparing it with the theoretical bound, we see that the latter is somewhat conservative.

**Figure 8**: Error

**B. Van der Pol Oscillator**

The problem consists of a second order differential equation proposed by B. van der Pol in the 1920s, which describes the behavior of nonlinear vacuum tube circuits. It has two periodic solutions: the constant solution, *z*(*t*) = 0, which is unstable, and a nontrivial periodic solution that corresponds to an attractive limit cycle. The equation depends on a parameter that weights the importance of the nonlinear part of the equation. The corresponding state equations are:

$\dot{x}_1(t) = x_2(t)$
$\dot{x}_2(t) = \mu\,\bigl(1 - x_1^2(t)\bigr)\,x_2(t) - x_1(t)$    (29)

We fixed the parameter μ = 1000, which gives rise to a stiff problem that is often used as a test problem for stiff ODE solvers (Enright and Pryce, 1987).
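The stiffness of this problem can be appreciated with a conventional linearly implicit (Rosenbrock-type) Euler scheme, sketched below for illustration. This is not the paper's LIQSS2 algorithm; the function names and the fixed step size are our choices. A single 2×2 linear solve per step replaces the Newton iteration that a fully implicit method would need, the same iteration-free idea behind LIQSS:

```python
# Linearly implicit (Rosenbrock-type) Euler sketch for the Van der Pol system
# with mu = 1000. Illustration only, NOT the paper's LIQSS2 method.
MU = 1000.0

def f(x1, x2):
    # Van der Pol state equations (29)
    return x2, MU * (1.0 - x1 * x1) * x2 - x1

def jac(x1, x2):
    # analytic Jacobian of f, row-major (j11, j12, j21, j22)
    return 0.0, 1.0, -2.0 * MU * x1 * x2 - 1.0, MU * (1.0 - x1 * x1)

def linearly_implicit_euler(x1, x2, h, t_end):
    t = 0.0
    while t < t_end:
        f1, f2 = f(x1, x2)
        j11, j12, j21, j22 = jac(x1, x2)
        # solve (I - h*J) d = h*f(x) by Cramer's rule (2x2 system)
        a11, a12 = 1.0 - h * j11, -h * j12
        a21, a22 = -h * j21, 1.0 - h * j22
        det = a11 * a22 - a12 * a21
        d1 = (h * f1 * a22 - h * f2 * a12) / det
        d2 = (h * f2 * a11 - h * f1 * a21) / det
        x1, x2 = x1 + d1, x2 + d2
        t += h
    return x1, x2
```

Starting from *x*_{1}(0) = 2, *x*_{2}(0) = 0, a step size *h* = 0.01 lies far beyond the explicit Euler stability limit (roughly 2/|μ(1 − *x*_{1}^{2})| ≈ 7·10^{−4} near the initial point), yet the linearly implicit step remains stable and the state stays on the slow manifold.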

The model was then built in PowerDEVS (Fig. 9).

**Figure 9**: PowerDEVS model of the Van der Pol oscillator

For the simulation, we used initial conditions *x*_{1}(0) = 2, *x*_{2}(0) = 0 and quantization Δ*Q*_{1} = 0.001, Δ*Q*_{2} = 1. We simulated the system with LIQSS2 until *t _{f}* = 4000. The results are shown in Figure 10. The total number of steps was 2159 (838 in *x*_{1} and 1321 in *x*_{2}). The simulation takes 0.031 seconds.

**Figure 10**: Van der Pol oscillator simulation

The same system was simulated using different Matlab/Simulink methods. The best results were obtained with ODE15s. In order to obtain results similar to those given by PowerDEVS, the error tolerance must be set to 10^{−14}. Using larger tolerances, the results were qualitatively similar but with a significant phase error. In this case, the number of steps was 2697 and the simulation takes 0.056 seconds.

When comparing these results, it must be taken into account that the order of LIQSS2 is smaller than that of ODE15s.

Finally, the system was simulated again using LIQSS2 but with quanta Δ*Q*_{1} = 0.0001 and Δ*Q*_{2} = 0.1 (ten times smaller). The number of steps performed was 4148 (1975 in *x*_{1} and 2173 in *x*_{2}) and the simulation takes 0.047 seconds. Comparing this result with the previous one, the number of steps increased only by a factor of about 2 while the quantum was reduced by a factor of ten. This evidences the second order nature of the LIQSS2 method.

**VI. CONCLUSIONS**

We presented two novel QBI methods that are able to integrate stiff systems based on linearly implicit principles. The fact that the methods do not require iterations permits an important reduction of the computational cost compared with traditional implicit discrete time methods.

We showed that these methods satisfy the global error bound property of the QSS methods.


The methods improve the performance of BQSS by solving the problem of spurious equilibrium points and by increasing the order of accuracy to 2.

Future work should develop higher order methods (following the idea of QSS3, for instance). It is also important to establish which kinds of stiff systems can be efficiently simulated with LIQSS methods.

^{1} The symbols |·| and "≤" denote the componentwise modulus and inequality, respectively.

**REFERENCES**

1. Cellier, F. and E. Kofman, *Continuous System Simulation*, Springer, New York (2006).

2. Enright, W.H. and J.D. Pryce, "Two FORTRAN packages for assessing initial value methods," *ACM Transactions on Mathematical Software*, **13**, 1-27 (1987).

3. Hairer, E., S. Norsett and G. Wanner, *Solving Ordinary Differential Equations I. Nonstiff Problems*, Springer, 2nd edition (1993).

4. Hairer, E. and G. Wanner, *Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems*, Springer, 1st edition (1991).

5. Kofman, E., "A Second Order Approximation for DEVS Simulation of Continuous Systems," *Simulation*, **78**, 76-89 (2002).

6. Kofman, E., "Discrete Event Simulation of Hybrid Systems," *SIAM Journal on Scientific Computing*, **25**, 1771-1797 (2004).

7. Kofman, E., "A Third Order Discrete Event Simulation Method for Continuous System Simulation," *Latin American Applied Research*, **36**, 101-108 (2006).

8. Kofman, E. and S. Junco, "Quantized State Systems. A DEVS Approach for Continuous System Simulation," *Transactions of SCS*, **18**, 123-132 (2001).

9. Migoni, G., E. Kofman and F. Cellier, "Integración por Cuantificación de Sistemas Stiff," *Revista Iberoamericana de Automática e Informática Industrial*, **4**, 97-106 (2007).

10. Pagliero, E., M. Lapadula and E. Kofman, "PowerDEVS. Una Herramienta Integrada de Simulación por Eventos Discretos," *Proceedings of RPIC'03*, San Nicolás, Argentina, **1**, 316-321 (2003).

11. Zeigler, B., T. Kim and H. Praehofer, *Theory of Modeling and Simulation*, Academic Press, New York, 2nd edition (2000).

**Received: October 18, 2007. Accepted: March 1, 2009. Recommended by Guest Editors D. Alonso, J. Figueroa, E. Paolini and J. Solsona.**