Latin American applied research

Print version ISSN 0327-0793

Lat. Am. appl. res. vol. 36, no. 2, Bahía Blanca, Apr./June 2006

 

Stable AGV corridor navigation based on data and control signal fusion

C. Soria1, E. Freire2 and R. Carelli1

1 Instituto de Automática - Universidad Nacional de San Juan
Av. San Martín Oeste, 5400 San Juan, Argentina
{rcarelli, csoria}@inaut.unsj.edu.ar

2 Centro de Ciências Exatas e Tecnologia - Universidade Federal de Sergipe
Av. Marechal Rondon S/N, Jardim Rosa Elze, São Cristovão/SE, Brasil

Abstract — This work presents a control strategy for mobile robots navigating in corridors, based on the fusion of the control signals from two redundant, homogeneous controllers: one based on optical flow calculation and the other based on the position of the robot with respect to the centerline of the corridor, estimated from ultrasonic and vision sensor data. Both controllers generate angular velocity commands to keep the robot navigating along the corridor while compensating for the robot dynamics. The fusion of both control signals is done through a Decentralized Information Filter. The stability of the resulting control system is analyzed. Experiments on a laboratory robot are presented to show the feasibility and performance of the proposed control system.

Keywords — Sensor Fusion. Mobile Robot. Artificial Vision. Nonlinear Control.

I. INTRODUCTION

A main characteristic of Autonomous Navigation is its capability of capturing environmental information through external sensors, such as vision, distance or proximity sensors. Although distance sensors (e.g., ultrasonic and laser), which allow detecting obstacles and measuring distances to walls and objects near the robot, are the most commonly used, vision sensors are increasingly being used because of their capability to provide richer information.

When autonomous mobile robots navigate within indoor environments (e.g., public buildings or industrial facilities) they should be able to move along corridors, turn at corners and enter/exit rooms. Regarding motion along corridors, some control algorithms have been proposed in various works. In Bemporad et al. (1997), a globally stable control algorithm for wall-following based on incremental encoders and one sonar sensor is developed. In Vassallo et al. (1998), image processing is used to detect perspective lines and to guide the robot along the corridor centerline; that work assumes an elementary control law and does not prove control stability. In Yang and Tsai (1999), ceiling perspective lines are employed for robot guidance, but that work also lacks a proof of system stability. Other authors have proposed using optical flow for corridor centerline guidance. Some approaches incorporate two video cameras on the robot sides and compute the optical flow to compare the apparent velocity of image patterns from both cameras (Santos-Victor et al., 1995). In Dev et al. (1997a), a camera is used to guide a robot along a corridor centerline or to follow a wall. In Segvic and Ribaric (2001), perspective lines are used to find the absolute orientation within a corridor. In Carelli et al. (2002), the authors proposed fusing the outputs of two vision-based controllers using a Kalman filter in order to guide the robot along the centerline of a corridor: one controller is based on optical flow, and the other on the perspective lines of the corridor. That work also presents a stability analysis for the proposed control system.

In general, the works previously cited have not included a stability analysis for the control system. On the other hand, the performance of the control system depends on environmental conditions such as illumination, surface textures and perturbations from image quality loss, among other factors, which may render the performance of an individual controller unacceptable. A solution to this problem is to fuse the outputs of multiple controllers, each based on different sensing information. Although having the same control objective, the controllers can be coordinated using the concept of behavior coordination (Pirjanian, 2000). With this concept, command fusion schemes accept a set of behavior instances that share the control of the entire system at all times.

Command fusion schemes can be classified into four categories: voting (e.g. DAMN (Rosenblatt, 1997)), superposition (e.g. AuRA (Arkin and Balch, 1997)), multiple objective (e.g. Multiple Decision-Making Control (Pirjanian, 2000)) and fuzzy logic (e.g. the multivaluated logic approach of Saffiotti et al., 1995). Another example of a command fusion strategy is the dynamic approach to behavior-based robotics (Bicho, 1999). In this paper we consider the command fusion structure previously proposed by the authors in Freire et al. (2004).

The present work is a continuation of Carelli et al. (2002). There, two redundant vision-based control algorithms were used, one based on optical flow calculation and the other based on the perspective lines of the corridor. In the present work, the latter controller is replaced by one which finds the perspective lines where the walls meet the floor and fuses this information with data obtained from ultrasonic sensors to estimate the robot position with respect to the centerline of the corridor. Based on this information, a controller generates the angular velocity command for the robot. The linear velocity of the robot may either be kept constant or be controlled in order to achieve a smooth and cautious navigation. This configuration is redundant, because both controllers have the same control objective; they are based, however, on different principles, which makes their fusion at the measurement level difficult. Here we propose fusing both commands to obtain a control signal that allows robust navigation along corridors. For the fusion we employ a control architecture based on control output fusion, as proposed in Freire et al. (2004), employing a Decentralized Information Filter (DIF) that minimizes the uncertainty level in both controllers. This uncertainty is evaluated, in terms of the measurement errors and the environment conditions, by means of a covariance function for each controller. A stability analysis of the resulting control system is presented as well. The work also includes experimental results on a Pioneer 2 DX laboratory robot navigating through the corridors of the Institute of Automatics, National University of San Juan, Argentina.

II. ROBOT AND CAMERA MODELS

A. Robot Model

Figure 1 represents the coordinate systems associated to the robot and the environment: a world system [W], a platform system [R] fixed to the robot and a sensor system [C] fixed to the vision camera. Considering Fig. 1, the kinematics model of a unicycle type robot can be expressed as (Dixon et al., 2001),

\dot{x} = v\,\cos\theta, \qquad \dot{y} = v\,\sin\theta, \qquad \dot{\theta} = \omega,    (1)

where ω is the angular velocity and v the linear velocity of the robot, and (x, y, θ) is the robot posture expressed in the world frame [W].


Fig. 1. Coordinate systems.
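As an illustration of model (1), the following Python sketch (our own example; the function and variable names are not from the paper) integrates the unicycle kinematics with Euler steps:

```python
import numpy as np

def unicycle_step(x, y, theta, v, omega, dt):
    """One Euler step of the unicycle kinematics (1):
    x_dot = v*cos(theta), y_dot = v*sin(theta), theta_dot = omega."""
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Example: constant forward speed with a small constant turn rate
x, y, theta = 0.0, 0.0, 0.0
for _ in range(100):
    x, y, theta = unicycle_step(x, y, theta, v=0.2, omega=0.05, dt=0.1)
```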

In order to compensate for vehicle dynamics, the dynamic model of the robot was obtained experimentally by step command response analysis. Of particular interest is the model relating ωR → ωy, where ωR is the reference angular velocity generated by the controller and sent to the robot, and ωy is the measured angular velocity of the robot. The identified model is approximately represented by a second order linear model,

(2)

with kω = 0.45, aω = 104.6, bω = 9.21.
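Since the image of Eq. (2) is not reproduced here, the sketch below simulates one plausible arrangement of the identified constants as a second-order model ω̈y + bω ω̇y + aω ωy = kω aω ωR (this arrangement is our assumption, not the paper's exact equation); its step response has DC gain kω:

```python
import numpy as np

# Identified constants reported in the paper
k_w, a_w, b_w = 0.45, 104.6, 9.21

def step_response(u=1.0, dt=0.001, n=2000, k=k_w, a=a_w, b=b_w):
    """Euler simulation of the assumed second-order model
    w_ddot + b*w_dot + a*w = k*a*u driven by a step reference u."""
    w, w_dot, out = 0.0, 0.0, []
    for _ in range(n):
        w_ddot = k * a * u - b * w_dot - a * w
        w_dot += w_ddot * dt
        w += w_dot * dt
        out.append(w)
    return np.array(out)

resp = step_response()   # settles near k_w * u = 0.45 rad/s
```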

B. Camera Model

A pinhole model for the camera is considered. The following relationship can be immediately obtained from Fig. 2,

r = \begin{bmatrix} x_r \\ y_r \end{bmatrix} = \frac{\alpha\,\lambda}{{}^{c}p_{z}} \begin{bmatrix} {}^{c}p_{x} \\ {}^{c}p_{y} \end{bmatrix},    (3)

where r is the projection of a point p on the image plane, λ is the focal length of the camera and α is a scale factor.


Fig. 2. Perspective projection camera model.
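For reference, a small Python sketch of the pinhole projection implied by (3), using the camera constants reported later in the experimental section (the component-wise form x_r = αx λ ^cpx / ^cpz, y_r = αy λ ^cpy / ^cpz is assumed here):

```python
import numpy as np

def project_point(p_c, lam=0.0054, alpha_x=166000.0, alpha_y=166000.0):
    """Pinhole projection of a point p_c = [px, py, pz] (camera frame [C], meters)
    onto the image plane, in pixels. lam: focal length [m];
    alpha_x, alpha_y: scale factors [pixels/m] (values from Section VI)."""
    px, py, pz = p_c
    return np.array([alpha_x * lam * px / pz,
                     alpha_y * lam * py / pz])

r = project_point([0.5, -0.2, 4.0])   # a point 4 m ahead of the camera
```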

C. Differential Camera-Robot Model

This subsection presents the kinematics relationship of the camera mounted on the moving robot evolving with linear velocity v and angular velocity ω. The Coriolis equation renders the motion of a point P in a coordinate system with translational and rotational motion V and Ω,

{}^{c}\dot{p} = -V - \Omega \times {}^{c}p,    (4)

by time-deriving (3) and using both (4) and (3), the components of ṙ on the image plane are found as,

, (5)
. (6)

For the camera mounted at the robot's center and pointing forward, Vx = Vy = 0 and ωx = ωz = 0 in the camera frame [C], due to the kinematic constraints of the robot. Besides, by calling v = Vz and ω = ωy, (5) and (6) can be written as,

, (7)

which represent the differential kinematics equations for the camera mounted on the robot.

D. Model for the perspective lines

The position and orientation of the robot can be obtained from the projection of the perspective lines in the corridor on the image plane. The parallel lines resulting from the intersection of corridor walls and floor are projected onto the image plane as two lines intersecting at the so-called vanishing point.

A point p in the global frame [W] can be expressed in the camera frame [C] as,

where

and

with γ the camera tilt angle and θ the robot heading.

Considering the component-wise expressions for the pinhole camera model (3),

x_r = \alpha_x\,\lambda\,\frac{{}^{c}p_{x}}{{}^{c}p_{z}}, \qquad y_r = \alpha_y\,\lambda\,\frac{{}^{c}p_{y}}{{}^{c}p_{z}},    (8)

any point in the global coordinate system is represented in the image plane as a projection point with coordinates

(9)
(10)

Now consider the points u1 = [0 0 0]^T, u2 = [0 1 0]^T, u3 = [d 0 0]^T, u4 = [d 1 0]^T that define the intersection lines r1 = (u1, u2) and r2 = (u3, u4) between corridor walls and floor, as illustrated in Fig. 3. Based on (9) and (10), the following relationships are obtained for the slopes of the perspective lines, the vanishing point coordinates and the intersections of both lines with the horizontal axis in the image plane, Fig. 4.

(11)
(12)
(13)
(14)
(15)
(16)


Fig. 3. Guide lines in the corridor.


Fig. 4. Perspective lines.

III. DATA FUSION

A controller based on the posture of the robot with respect to the centerline of the corridor requires the values of the states x̃(t) (displacement of the robot from the corridor centerline) and θ(t) (robot heading) at each instant. These values can be obtained from sonar measurements as described in Section III.A. It is important to consider other measurements as well, such as the odometric data provided by the robot. The fusion of these data using optimal filters produces optimal estimates of the robot states, thus minimizing the uncertainty in sensor measurements. Some authors, e.g. Sasiadek and Hartana (2000), have fused odometric and sonar data. In this work, we fuse the sonar data with the vision data described in Section III.B. The fusion of sonar and vision measurements is done by using a decentralized information filter (DIF).

A. Data from Ultrasonic Sensors

Figure 5 shows a typical situation of a robot equipped with sonar sensors where lateral sensors S0, S15, S7, and S8 are used. For this case, the following equations allow calculating the state variables,

, (17)
, (18)
. (19)


Fig. 5. Calculation of state variables from distance measurements.

Sonar measurements may deteriorate or become impossible to obtain under certain circumstances, for example when the robot is traveling past an open door in the corridor, or when the robot has a significant angle of deviation from the corridor axis. The latter condition originates from the fact that a sonar sensor collects useful data only when the direction orthogonal to the reflecting surface lies within the beam width of the receiver, thus allowing wall detection only within a restricted heading range (Bemporad et al., 1997). This range is approximately φ = 17° for the electrostatic sensors of the robot used in the experiments.
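A possible way to compute the robot posture from the four lateral readings is sketched below; this is our own geometric illustration (sensor naming and sign conventions are assumptions), not a transcription of Eqs. (17)-(19):

```python
import numpy as np

def posture_from_sonar(dlf, dlb, drf, drb, baseline):
    """Estimate (x_tilde, theta) from lateral sonar readings.
    dlf, dlb: left front/back distances; drf, drb: right front/back distances [m];
    baseline: longitudinal separation between the two sensors of a pair [m].
    Illustrative sketch only; not the paper's Eqs. (17)-(19)."""
    # Heading with respect to the corridor axis, from the right-side reading difference
    theta = np.arctan2(drb - drf, baseline)
    # Perpendicular distances to each wall
    d_left = 0.5 * (dlf + dlb) * np.cos(theta)
    d_right = 0.5 * (drf + drb) * np.cos(theta)
    # Lateral displacement from the corridor centerline
    x_tilde = 0.5 * (d_right - d_left)
    return x_tilde, theta
```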

B. Data from Vision Sensor

It is important to express the control objective of navigating along the corridor centerline in terms of the image features from the perspective lines. The robot is following the centerline of the corridor when the slopes of both perspective lines become equal; that is, when xv (the vanishing point abscissa) and δx (the middle point between the intersections of both perspective lines with the horizontal axis) are equal to zero, Fig. 4. In the workspace, the orientation error θ and the robot position error x̃ relative to the center of the corridor are defined. These errors can be expressed in terms of the image features xv and δx. Eqn. (13) can be written as,

from which,

. (20)

Besides,

By substituting (15) and (16),

and, recalling the relations above, x̃ can be explicitly expressed as

, (21)

where

Eqs. (20) and (21) render the orientation and position errors as a function of xv and δx.

C. Decentralized Information Filter

The state variables x̃(t) and θ(t) obtained using the data from the ultrasonic and vision sensors are fused using a decentralized information filter (DIF), as presented in Fig. 6.


Fig. 6. Decentralized Information Filter.

The variables y and Y that appear in the figure are, respectively, the information vector and the information matrix. The information matrix is the inverse of the covariance matrix P of the Kalman filter, and the information vector is obtained by multiplying the information matrix by the state vector. More details about this fusion by DIF are given in Freire et al. (2004) and in the Appendix.

The variance computation for each of the variables x̃ and θ is given by the following recursive equations (shown here for one of them),

where λ = 0.1 is the damping factor and the mean value is updated with an analogous recursion.
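One possible form of such a recursion (our sketch, consistent with the damping factor λ = 0.1 mentioned above but not necessarily the paper's exact expressions) is an exponentially weighted mean/variance update:

```python
def update_stats(x, mean, var, lam=0.1):
    """Exponentially weighted recursive estimate of mean and variance.
    An abrupt change in x (e.g., an open door inflating the sonar reading)
    quickly inflates the variance, which the DIF then uses to down-weight
    that data source."""
    mean = (1.0 - lam) * mean + lam * x
    var = (1.0 - lam) * var + lam * (x - mean) ** 2
    return mean, var

mean, var = 0.0, 0.0
for x in [0.02, 0.01, 0.03, 0.35, 0.33]:   # sudden jump in x_tilde
    mean, var = update_stats(x, mean, var)
```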

IV. CONTROLLERS

A. Controller Based on the Optical Flow

One of the control proposals for navigation along the corridor is based on the calculation of the optical flow (Barron et al., 1994) in two symmetric lateral regions of the image plane, located at rx1 = -rx2 (Fig. 7). From (7), the horizontal optical flow at these points is given by

(22)


Fig. 7. Schematics of the control proposal

To navigate along the corridor centerline, the control objective on the image plane is to equate the lateral optical flows. Then, from (22),

(23)

In addition, if the robot rotation ω = 0, then ^cpz1 = ^cpz2, which means that the robot is navigating along the corridor centerline. From (22), the vision model for the lateral optical flow measured at rx1 = -rx2 is

(24)

where J is called the Jacobian of the robot-camera system.

Now, by considering the dynamic model of the robot (2),

(25)

an inverse dynamics control law is regarded

(26)

where η is a variable defined as,

(27)

In (27), ωd is interpreted as the desired angular velocity, which is set to zero in order to comply with the control objective of maintaining stable navigation along the corridor. Besides, kpω and kdω are design gains. In order to include the exteroceptive information of optical flow, the inverse of relation (24),

, (28)

is substituted into the angular-velocity error term in (27),

(29)

By combining (25) and (26) the closed-loop equation is obtained as,

which implies ω(t) → 0 as t → ∞. From (22) with ω = 0, it follows that ^cpz1 = ^cpz2. Then, the unique navigation condition is verified at the centerline of the corridor.
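The idea behind this controller can be summarized in a few lines; the sketch below is only an illustration of the flow-balancing principle (a plain proportional law on the flow imbalance), not the paper's inverse-dynamics law (26)-(29):

```python
def flow_balance_command(flow_left_mag, flow_right_mag, k_p=1.0):
    """The magnitude of the lateral optical flow is larger on the closer wall,
    so steering away from the side with larger apparent motion drives the robot
    toward the corridor centerline, where both lateral flows balance.
    Returns an angular velocity command [rad/s]; positive = turn left (assumed)."""
    return -k_p * (flow_left_mag - flow_right_mag)
```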

B. Controller Based on the Robot Posture with Respect to the Centerline of the Corridor

In this second control proposal, the design objective is to obtain a controller which, based on the estimated values of the state variables θ and x̃ obtained from data fusion, attains x̃(t) → 0 and θ(t) → 0 as t → ∞; that is, the control navigation objective is asymptotically achieved. To this aim, the following control law is proposed

(30)

where the gain functions are designed to avoid saturation of the control signals, as will be explained later.

By considering (1) and (30) with state variables θ and x̃, the unique equilibrium point of the closed-loop equation is [0 0]^T. Asymptotic stability of the control system can be proved by regarding the following Lyapunov function

and by applying the Krasovskii-LaSalle theorem (Khalil, 1996).

Saturation gains in (30) can be defined as follows (Carelli and Freire, 2003),

where Ks1 > 0 and a1 > 0, in order to obtain a positive definite function. The second gain is defined likewise, with Ks2 > 0 and a2 > 0.

The constants are selected such that the terms in (30) do not produce saturation of the control signal ωR. Finally, the controller implemented here is an inverse dynamics controller like that of (26) and (27), with ωR given by (30).
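As an illustration of a saturated posture controller of this kind (the tanh shaping below is our assumption; Eq. (30) and its exact gain functions are not reproduced in the text), using the gain values reported in the experimental section:

```python
import numpy as np

def posture_control(x_tilde, theta, Ks1=0.24, a1=0.2, Ks2=0.48, a2=0.1):
    """Saturated control law driving (x_tilde, theta) toward (0, 0).
    Ks1, Ks2 bound the contribution of each error term; a1, a2 set the
    error scale at which each term saturates. Illustrative sketch only."""
    return -Ks1 * np.tanh(theta / a1) - Ks2 * np.tanh(x_tilde / a2)

omega_R = posture_control(x_tilde=0.15, theta=-0.05)   # [rad/s]
```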

V. FUSION OF CONTROL SIGNALS

The controllers described in Section IV are redundant, because they have the same control objective: to guide the robot along the corridor centerline. They are based, however, on different principles, which makes their fusion at the measurement level difficult. Here, the fusion of both control commands is proposed, in order to attain a control signal that allows robust navigation along the corridor. Fusion is achieved through a decentralized information filter (DIF), thus minimizing the uncertainty of both control signals. This uncertainty is evaluated by introducing a time-varying variance function for each controller, using equations similar to those given in Section III.C.
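For a single scalar command with no process dynamics, fusing with an information filter reduces to inverse-variance weighting; the following simplified sketch (not the full DIF of the Appendix) conveys how the less uncertain controller dominates the fused command:

```python
def fuse_commands(omega_flow, var_flow, omega_posture, var_posture):
    """Information-weighted fusion of two angular-velocity commands.
    Each command contributes proportionally to its information (1/variance);
    the variance of the fused command is the inverse of the total information."""
    info = 1.0 / var_flow + 1.0 / var_posture
    y = omega_flow / var_flow + omega_posture / var_posture
    return y / info, 1.0 / info   # fused command and its variance

omega, var = fuse_commands(0.10, 0.04, 0.02, 0.005)  # posture controller dominates
```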

A. Stability of the Control System

Let us consider that, as in Fig. 8, n controllers with the same control objective are used. Then, the following set of control signals from the inverse dynamics controllers (26) is obtained,

The fused control signal can be represented as

(31)

Fig. 8. Output fusion from different controllers.

To an ideal control command ωd = ωdi + Δωdi there corresponds an ideal η such that

or in terms of the fused signal ,

(32)

By equating (25) and (31) and taking (32) into account

. (33)

From (27) and (33), it is now possible to write the following dynamics for the angular velocity error

. (34)

Defining the state vector , equation (34) can be written as

(35)

where

It can be proved that the system described by (35) has an ultimately bounded solution (Khalil, 1996). This means that there exist b, c > 0 such that for each α ∈ (0, c) there is a positive constant T = T(α) so that ||x(t0)|| < α ⇒ ||x(t)|| ≤ b, ∀ t ≥ t0 + T(α), where b is the ultimate bound. By regarding the following Lyapunov candidate

V = x^T P x, P = P^T > 0

its time derivative is

\dot{V} = -x^T Q x + 2\,x^T P\,\delta(x),    (36)

where A^T P + PA = -Q. Besides, considering bounds on both terms of (36),

\dot{V} \le -\lambda_{min}(Q)\,\|x\|^2 + 2\,\lambda_{max}(P)\,\|x\|\,\|\delta(x)\|.    (37)

From (36), \|\delta(x)\| \le |\Delta|. By regarding (37), \dot{V} \le -(1-\theta)\,\lambda_{min}(Q)\,\|x\|^2 - \theta\,\lambda_{min}(Q)\,\|x\|^2 + 2\,\lambda_{max}(P)\,\|x\|\,|\Delta|, with 0 < θ < 1. Finally, it results

so that the ultimate bound is

Since a DIF is being used to fuse the control signals, the ultimate bound on the error of the fused system is smaller than the bounds corresponding to the errors produced by each individual controller.
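A standard way to close this argument (following Khalil, 1996; the paper's own closing expression is not legible in this copy) is:

\[
\dot{V} \le -(1-\theta)\,\lambda_{\min}(Q)\,\|x\|^2
\quad \forall\; \|x\| \ge \mu := \frac{2\,\lambda_{\max}(P)\,|\Delta|}{\theta\,\lambda_{\min}(Q)},
\]

so that, since \( \lambda_{\min}(P)\|x\|^2 \le V \le \lambda_{\max}(P)\|x\|^2 \), the trajectories are ultimately bounded with

\[
b = \sqrt{\frac{\lambda_{\max}(P)}{\lambda_{\min}(P)}}\;\mu .
\]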

VI. EXPERIMENTAL RESULTS

In order to evaluate the performance of the proposed control system, several experiments were carried out on a Pioneer 2DX mobile robot with an on-board Sony PTZ CCD camera. The images are transmitted via RF to the image processing units: one PC for the optical flow calculation and a second one for the corridor perspective-line calculation. A third PC is used to process the ultrasonic data, to calculate the control actions and to perform their fusion. All PCs are connected via TCP/IP. The resulting control action is sent to the robot via RF.

The optical flow calculation was implemented using the Least-Mean-Square Method (Dev et al., 1997b). The corridor perspective lines are calculated using Hough transforms. The information from the image processing is updated every 200 ms. The camera constant values are: αx = αy = 166000 pixels/m, λ = 0.0054 m, γ = -5°, h = 0.31 m. The robot navigates with linear velocity v = 0.2 m/s. The design parameters for the optical flow controller are set to kpω = 20 and kdω = 1; for the controller based on the position of the robot with respect to the corridor centerline, they are set to kpω = 10, kdω = 6, Ks1 = 0.24 rad/s, Ks2 = 0.48 rad/(s·m), a1 = 0.2 rad, a2 = 0.1 m.

If an opening appears in the wall (a door or a crossing corridor), it produces an abrupt increase in the state variable x̃ due to an abrupt increase in dright or dleft, Eqs. (17)-(19). This makes the variance associated with these data grow larger than the variance from the perspective-line calculation. Regarding the controller based on optical flow, when the visual sensor detects no flow (a wall surface with no texture), this controller is in practice disconnected, because no information is being captured from the environment. Figure 9 shows the trajectory of the robot navigating along a corridor at the Institute of Automatics, National University of San Juan, Argentina. The experiment is designed so that the robot encounters different sensing and environment conditions during navigation. These varying conditions produce changes in the variance of the control action of each controller. The evolution of these variances is shown in Fig. 10. The data fusion obtained from ultrasonic sensors and perspective lines is shown in Fig. 11. Figure 12 depicts the control actions obtained from the controller based on robot posture, the controller based on optical flow, and the fusion of both control actions. The experiment shows a good performance of the robot when navigating along the corridor centerline, independently of the varying environment conditions. Finally, to emphasize the robustness of robot navigation along a corridor using the proposed control system, Fig. 13 shows the trajectory of the robot navigating with only one controller, the one based on optical flow. When the wall has no visual texture to guarantee consistent measurements, as happens along the second half of the corridor, the controller performance degrades notably, as shown in Fig. 13.


Fig. 9. Mobile robot trajectory.


Fig. 10. Variances from Line and Optical Flow controllers.


Fig. 11. X-tilde and Theta from data fusion.


Fig. 12. Control actions


Fig. 13. Mobile robot trajectory using optical flow controller.

VII. CONCLUSIONS

This work has presented a control strategy for mobile robots navigating in corridors, based on the fusion of the control signals from two redundant controllers. To this aim, two controllers have been proposed: one based on optical flow calculation and the other based on the robot posture with respect to the corridor centerline, estimated by fusing data from ultrasonic sensors and the corridor perspective lines. Both controllers generate angular velocity commands to keep the robot navigating along the corridor, and they compensate for the dynamics of the robot. The fusion of both control signals was realized by using a decentralized information filter (DIF). The stability of the resulting control system was analyzed, and experiments on a laboratory robot were presented, showing the performance of the proposed control system.

APPENDIX

A. The Information Filter

The equations of the Kalman filter are presented first, since they are the basis for deriving the equations corresponding to the decentralized information filter.

Such equations are

x(k + 1) = Φ(k) x(k) + w(k)

and

z(k) = H(k) x(k) + v(k),

where x(k) is the (n × 1) state vector of the process, Φ(k) is the (n × n) state transition matrix, z(k) is the (m × 1) vector of observations, and H(k) is the (m × n) observation matrix, all at the k-th instant. The (n × 1) vector w(k) is a sequence of Gaussian noise with known covariance R(k) (the process noise), and the (m × 1) vector v(k) represents the measurement error, a sequence of Gaussian noise with known covariance Q(k) (the measurement noise).

The information filter is an algebraic equivalent of the Kalman filter (Mutambara, 1998) that is based on the equation

Y(k) = P-1(k),

where Y(k) is called the information matrix and P(k) is the matrix of error covariance, both at k-th instant. The state information vector, at the same instant, is represented as

ŷ(k) = Y(k) x̂(k).

The prediction equations are

Y(k|k-1) = [Φ(k) Y^{-1}(k-1) Φ^T(k) + R(k)]^{-1},
ŷ(k|k-1) = L(k) ŷ(k-1),    L(k) = Y(k|k-1) Φ(k) Y^{-1}(k-1),

while the estimation equations are stated as

ŷ(k) = ŷ(k|k-1) + i(k),    i(k) = H^T(k) Q^{-1}(k) z(k),
Y(k) = Y(k|k-1) + I(k),    I(k) = H^T(k) Q^{-1}(k) H(k).

The variable L(k) is called the coefficient of information propagation, i(k) is the state contribution and I(k) is the information matrix associated with each state.

B. The Decentralized Information Filter

A block diagram characterizing the decentralized information filter is shown in Fig. 14. This filter is initialized more easily, uses simpler equations and converges faster than the decentralized Kalman filter. In addition, the largest matrix to be inverted has the dimension of the state vector, not that of the observation vector. These features are the reasons why the decentralized information filter was chosen for this work.


Fig. 14. The decentralized information filter.

The equations describing the decentralized information filter are basically a group of equations defining the local filters, plus one equation defining the global filter. The equations corresponding to the local filters are

Y_i(k) = Y(k - 1) + I_i(k),    ŷ_i(k) = ŷ(k - 1) + i_i(k),

where Y_i is the local information matrix, ŷ_i is the local information vector, Y is the global information matrix and ŷ is the global information vector. For the global filter, one gets

where k indicates the instant of time considered and n is the number of local filters.
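The global assimilation equations are not legible in this copy; a common form, consistent with the local update written above (our reconstruction based on Mutambara, 1998, not a transcription of the paper), simply adds the information contributions of the n local filters to the previous global estimate:

\[
\hat{y}(k) = \hat{y}(k-1) + \sum_{i=1}^{n} i_i(k), \qquad
Y(k) = Y(k-1) + \sum_{i=1}^{n} I_i(k),
\]

which is equivalent to \( Y(k) = Y(k-1) + \sum_{i=1}^{n}\left[\,Y_i(k) - Y(k-1)\,\right] \) with the local matrices \( Y_i(k) \) defined above.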

ACKNOWLEDGEMENTS
The authors gratefully acknowledge SETCIP and CONICET (Argentina), and the Universidade Federal de Sergipe (Brazil) for partially funding this research.

REFERENCES
1. Arkin, R. and T. Balch, "AuRA: principles and practice in review", Experimental and Theoretical Artificial Intelligence, 9, 175-189 (1997).
2. Barron, J. L., D. J. Fleet and S. S. Beauchemin, "Performance of optical flow techniques", IJCV, 12, 43-47 (1994).
3. Bemporad, A., M. Di Marco and A. Tesi, "Wall-following controllers for sonar-based mobile robots", Proc. 36th IEEE Conf. on Decision and Control, San Diego, USA, 3063-3068 (1997).
4. Bicho, E., The Dynamic Approach to Behavior-Based Robotics, PhD Thesis, University of Minho, Portugal (1999).
5. Carelli, R. and E. Freire, "Corridor navigation and wall-following stable control for sonar-based mobile robots", Robotics and Autonomous Systems, 45, 235-247 (2003).
6. Carelli, R., C. Soria, O. Nasisi and E. Freire, "Stable AGV corridor navigation with fused vision-based control signals", IECON'02 - 28th Annual Conf. of the IEEE Industrial Electronics Society, Sevilla, Spain, 2433-2438 (2002).
7. Dev, A., B. Kröse and F. Groen, "Navigation of a mobile robot on the temporal development of the optic flow", Proc. of the IEEE/RSJ/GI Int. Conf. on Intelligent Robots and Systems IROS'97, France, 558-563 (1997a).
8. Dev, A., B.J.A. Kröse and F.C.A. Groen, "Confidence measures for image motion estimation", RWCP Symposium, Tokyo, Japan, 1999-2006 (1997b).
9. Dixon, W., D. Dawson, E. Zergeroglu and A. Behal, Nonlinear Control of Wheeled Mobile Robots, Springer Verlag (2001).
10. Freire, E., T. Bastos Filho, M. Sarcinelli Filho and R. Carelli, "A new mobile robot control architecture: fusion of the output of distinct controllers", IEEE Trans. on Systems, Man and Cybernetics, Part B: Cybernetics, 34, 419-429 (2004).
11. Khalil, H. K., Nonlinear Systems, Second Edition, Prentice-Hall (1996).
12. Mutambara, A.G.O., Decentralized Estimation and Control for Multisensor Systems, CRC Press, USA (1998).
13. Pirjanian, P., "Multiple objective behavior-based control", Robotics and Autonomous Systems, 31, 53-60 (2000).
14. Rosenblatt, J., DAMN: A Distributed Architecture for Mobile Navigation, PhD Thesis, Carnegie Mellon University, USA (1997).
15. Saffiotti, A., K. Konolige and E. Ruspini, "A multivaluated logic approach to integrating planning and control", Artificial Intelligence, 76, 481-526 (1995).
16. Santos-Victor, J., G. Sandini, F. Curotto and S. Garibaldi, "Divergent stereo in autonomous navigation: from bees to robots", Int. J. of Computer Vision, 14, 159-177 (1995).
17. Sasiadek, J.Z. and P. Hartana, "Odometry and sonar data fusion for mobile robot navigation", 6th IFAC Symposium on Robot Control, SYROCO'00, Vienna, Austria, Preprints, II, 531-536 (2000).
18. Segvic, S. and S. Ribaric, "Determining the absolute orientation in a corridor using projective geometry and active vision", IEEE Trans. on Industrial Electronics, 48, 696-710 (2001).
19. Vassallo, R., H. J. Schneebeli and J. Santos-Victor, "Visual navigation: combining visual servoing and appearance based methods", SIRS'98, Int. Symp. on Intelligent Robotic Systems, Edinburgh, Scotland, 334-337 (1998).
20. Yang, Z. and W. Tsai, "Viewing corridors as right parallelepipeds for vision-based vehicle localization", IEEE Trans. on Industrial Electronics, 46, 653-661 (1999).

Received: September 21, 2005.
Accepted for publication: February 6, 2006.
Recommended by Guest Editors C. De Angelo, J. Figueroa, G. García and J. Solsona.
