
## Latin American applied research

*print version* ISSN 0327-0793

### Lat. Am. appl. res. v.35 n.2 Bahía Blanca Apr./June 2005

**A method solving an inverse problem with unknown parameters from two sets of relative measurements**

**J. R. Vega ^{1}, G. L. Frontini^{2}, L. M. Gugliotta^{1}, and G. E. Eliçabe^{2}**

^{1} *Instituto de Desarrollo Tecnológico para la Industria Química (INTEC), CONICET and Univ. Nac. del Litoral, 3000 Santa Fe, Argentina jvega@ceride.gov.ar; lgug@intec.unl.edu.ar*

^{2} *Instituto de Investigación en Ciencia y Tecnología de Materiales (INTEMA), Univ. Nac. de Mar del Plata, 7600 Mar del Plata, Argentina.* gfrontin@fi.mdp.edu.ar; elicabe@fi.mdp.edu.ar

*Abstract* - This work deals with an ill-posed inverse problem in which a distribution function, *f*(*x*), is estimated from two independent sets of non-negative relative measurements. Each measurement set is modeled through a Fredholm equation of the first kind, with unknown parameters in its kernel. While the first measurement model only includes a scalar unknown parameter, *p*_{0}, the second model contains a vector of unknown parameters, p. The proposed method consists of the following steps: (i) to obtain a first estimate of *f*(*x*) and *p*_{0} from the first measurement; (ii) to estimate the vector p from the second measurement and the previous estimate of *f*(*x*); and (iii) to estimate an improved *f*(*x*) by simultaneously using both measurements and the estimated parameters in a unique combined problem. The proposed algorithm is evaluated through a numerical example for simultaneously estimating the particle size distribution and the refractive index of a polymer latex, from combined measurements of elastic light scattering and turbidity.

*Keywords* - Inverse Problem. Parameter Estimation. Combined Measurements. ELS. Turbidity.

**I. INTRODUCTION**

Inverse problems are frequently present in measurement systems that include non-ideal devices, or when only indirect measurements are available. Indirect measurements arise when a physical property of a sample, *f*(*x*), must be estimated from the measurement of a different physical quantity, *g*(*y*). The estimation problem consists of finding *f*(*x*) from *g*(*y*), on the basis of theoretical measurement models that relate these variables. Often, this kind of inverse problem is numerically 'ill-conditioned'; *i.e.*, small changes in the measured variable (for example, caused by different noise levels) may lead to large changes in the estimated variables (Kirsch, 1996). As a consequence, quite different *f*(*x*) can generate almost the same *g*(*y*), thus complicating the estimation problem.

Regularization methods replace the unstable original problem by a similar but stable (well-conditioned) one; they usually include adjustable parameters, *a priori* knowledge of the solution, or some smoothness condition (Kirsch, 1996; Engl *et al.*, 1996; Hansen, 1994). When a regularization condition is included, several solutions can usually be attained, and a trade-off must be resolved: (i) a strong regularization modifies the original problem, with the advantage of leading to a smooth solution; whereas (ii) a weak regularization keeps the original problem almost unchanged, but can originate oscillating solutions. In general, every regularization method includes (at least) one adjustable parameter, which is usually selected by the user on the basis of the obtained solutions. Alternatively, the regularization parameter can be automatically estimated through numerical methods, such as the Generalized Cross Validation (GCV) technique (Golub *et al.*, 1979) or the L-curve technique (Hansen, 1994).
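As an illustration of automatic parameter selection, the GCV score can be evaluated cheaply from the singular value decomposition of a discretized model matrix. The sketch below is a minimal Python illustration, assuming a generic Tikhonov filter and a toy smoothing kernel; the matrix, grid, and noise level are illustrative and not taken from the paper.

```python
import numpy as np

# Sketch: selecting the Tikhonov regularization parameter automatically
# by Generalized Cross Validation (GCV), via the SVD of the model matrix.
def gcv_alpha(A, g, alphas):
    """Return the candidate alpha minimizing the GCV score."""
    U, s, _ = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ g
    scores = []
    for a in alphas:
        filt = s ** 2 / (s ** 2 + a)          # Tikhonov filter factors
        resid = np.sum(((1.0 - filt) * beta) ** 2)
        dof = len(g) - np.sum(filt)           # effective residual dof
        scores.append(resid / dof ** 2)
    return alphas[int(np.argmin(scores))]

# Toy smoothing problem (illustrative only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)
g = A @ np.exp(-((x - 0.4) ** 2) / 0.02) + 1e-3 * rng.standard_normal(x.size)

alphas = np.logspace(-10, 2, 25)
alpha = gcv_alpha(A, g, alphas)
print(alpha > 0.0)                            # a positive alpha is selected
```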

The combination of two or more independent sets of measurements (*e.g.*, obtained from two different instruments) increases the information content of the problem, and can improve the quality of the estimates. The numerical treatment of the combined problem is simple when the indirect measurements are absolute (*i.e.*, when all proportionality constants are known) and the involved mathematical models are linear (Eliçabe and Frontini, 1996). However, if the model relating the measured physical quantities includes unknown proportionality constants, then the measurements are relative, and a 'normalization' parameter is required to adequately combine all sets of measurements in a unique problem (Frontini and Eliçabe, 2000). In such cases, the resulting combined problem may be nonlinear, and some alternative algorithms have been proposed to solve it. For example, Frontini and Eliçabe (2000) developed a method to estimate *f*(*x*) and the normalization parameter in a single step. Alternatively, Vega *et al.* (2001) and Vega *et al.* (2003) proposed a "two-step" procedure that involves the resolution of two linear estimation problems: in the first step, the normalization parameter is estimated by conciliating the two independent measurements; and in the second step, the sought *f*(*x*) is obtained after numerical inversion of the combined problem.

The estimation problem becomes more difficult when the mathematical model is not exactly known, or when some of its parameters are unknown. Typically, linear inverse problems with unknown kernel parameters may be described through the following Fredholm equation of the first kind:

*g*(*y*) = *k* ∫ *A*[*y*, *x*; *p*(*y*)] *f*(*x*) d*x* | (1) |

where *f*(*x*) is an unknown function of the distributed variable, *x*; *g*(*y*) represents the indirect measurement of *f*(*x*) at each measurement point, *y*; the kernel *A* contains a set of unknown parameters, *p*(*y*); and *k* is a proportionality constant that adequately scales the model predictions to the real measurements. The constant *k* includes the detector gain and is often unknown. Moreover, sometimes this constant may depend on the unknown *f*(*x*) distribution (Vega *et al.*, 2003). In practice, both continuous variables (*x* and *y*) of Eq. (1) are limited to a finite range of values.
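A discretized version of Eq. (1) can be sketched as follows, assuming an illustrative Gaussian kernel and a simple rectangle-rule quadrature (the paper's physical kernels are introduced in Section III):

```python
import numpy as np

# Sketch: discretizing the Fredholm equation (1) by the rectangle rule,
# so that g ≈ (k * A * dx) f.  The Gaussian kernel and the grids are
# illustrative only, not the physical kernels used in the paper.
def discretize_fredholm(kernel, x, y, k=1.0):
    """Return the m-by-n matrix M = k * A(y_i, x_j) * dx with g ≈ M f."""
    dx = x[1] - x[0]
    A = kernel(y[:, None], x[None, :])        # A[i, j] = A(y_i, x_j)
    return k * A * dx

x = np.linspace(0.0, 1.0, 50)                 # n = 50 abscissas of f(x)
y = np.linspace(0.0, 1.0, 80)                 # m = 80 measurement points
kernel = lambda yy, xx: np.exp(-((yy - xx) ** 2) / 0.02)
M = discretize_fredholm(kernel, x, y)

f = np.exp(-((x - 0.5) ** 2) / 0.02)          # a smooth distribution
g = M @ f                                     # simulated indirect measurement
print(M.shape, g.shape)                       # (80, 50) (80,)
```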

In the case of only one unknown parameter in the kernel (*i.e.*, when *p* = *p*_{0} = constant), the estimation of *f*(*x*) and *p*_{0} from Eq. (1) has already been considered (Frontini and Fernández Berdaguer, 2003). In this work, a more general case when the unknown parameters depend on the independent measurement variable [*i.e.*, *p* = *p*(*y*)], will be investigated. The next section details the problem statement and proposes a method for estimating both *f*(*x*) and *p*(*y*). In section III, a numerical example is considered to evaluate the proposed method. The example simulates the estimation of the particle size distributions (PSD) of a polymer latex from turbidity (T) and elastic light scattering (ELS) measurements, when the refractive index of the polymer particles is unknown.

**II. THEORY**

**A. Formulation of the Discrete Combined Problem**

Assume that each independent variable (*x* and *y*) of Eq. (1) is discretized at regular intervals Δ*x* and Δ*y*, respectively. Then, the discrete version of Eq. (1) can be interpreted as a set of 'm' algebraic equations (one for each *y* value) in 'n' unknowns (one for each *x* value). Thus, Eq. (1) can be written as the following vector equation:

g = k A(p) f | (2) |

where the vector **g** (m×1) contains the measurements; the matrix **A** (m×n) is theoretically known, but it has 'r' unknown parameters represented by the vector **p** (r×1); and the vector **f** (n×1) represents the unknown ordinates of the distribution. Usually, n < m and r ≤ m.

Consider two different sets of measurements, *g*_{1}(*y*_{1}) and *g*_{2}(*y*_{2}), obtained from two different instruments. The following is assumed: (1) both measurements are non-negative; (2) in measurement *g*_{1}(*y*_{1}), the kernel *A*_{1} has only one unknown parameter, *p*_{0}; and (3) in measurement *g*_{2}(*y*_{2}), the kernel *A*_{2} has one unknown parameter for each value of the independent variable *y*_{2}, *p*(*y*_{2}). Then, the discrete equations for the measurements are represented by:

g_{1} = k_{1} A_{1}(p_{0}) f | (3.a) |

g_{2} = k_{2} A_{2}(p) f | (3.b) |

where *k*_{1} and *k*_{2} are normally unknown proportionality constants. Eqs. (3) can be rewritten as:

g_{1} = A_{1}(p_{0}) f | (4.a) |

g_{2} = K A_{2}(p) f_{r} | (4.b) |

where **f**_{r} = *k*_{1} **f**, and *K* = *k*_{2}/*k*_{1} is the 'normalization' parameter defined in Vega *et al.* (2001) and Vega *et al.* (2003). Vector and matrix dimensions in Eqs. (4) are: **g**_{1} (m_{1}×1), **g**_{2} (m_{2}×1), **A**_{1} (m_{1}×n), **A**_{2} (m_{2}×n), **f**_{r} (n×1), and **p** (m_{2}×1).

Equations (4) can be combined in a unique expression, as follows:

g = A(K, p_{0}, p)f_{r} | (5) |

with:

g = [g_{1}/g_{1,max} ; g_{2}/g_{2,max}] | (6.a) |

A(K, p_{0}, p) = [A_{1}(p_{0})/g_{1,max} ; K A_{2}(p)/g_{2,max}] | (6.b) |

where **g** [(m_{1}+m_{2})×1] is the vector of combined measurements; **A** [(m_{1}+m_{2})×n] is the combined matrix; and *g*_{1,max}, *g*_{2,max} are the maximum values of **g**_{1} and **g**_{2}, respectively. The scaling by *g*_{1,max}, *g*_{2,max} in Eqs. (6) is used to treat the inversion problem of Eq. (5) as if the noise variances were the same for both sets of measurements (Eliçabe and Frontini, 1996).
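The stacking and scaling of Eqs. (5, 6) can be sketched as follows, with placeholder random matrices standing in for the discretized kernels:

```python
import numpy as np

# Sketch of Eqs. (5)-(6): stack two scaled measurement models into one
# combined linear system g = A(K, p0, p) f_r.  A1, A2, and f_r are
# placeholder random arrays for illustration only.
rng = np.random.default_rng(0)
n, m1, m2 = 20, 30, 40
f_r = rng.random(n)
A1 = rng.random((m1, n))
A2 = rng.random((m2, n))
K_true = 2.5
g1 = A1 @ f_r                      # Eq. (4.a)
g2 = K_true * (A2 @ f_r)           # Eq. (4.b)

def combine(g1, g2, A1, A2, K):
    """Build the combined vector g and matrix A of Eqs. (5)-(6)."""
    s1, s2 = g1.max(), g2.max()    # scaling equalizes the noise variances
    g = np.concatenate([g1 / s1, g2 / s2])
    A = np.vstack([A1 / s1, K * A2 / s2])
    return g, A

g, A = combine(g1, g2, A1, A2, K_true)
# With the exact K, the combined model reproduces the combined measurement:
print(np.allclose(g, A @ f_r))     # True
```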

The problem to be solved consists of finding *K*, *p*_{0}, **p**, and **f**_{r}, from the measurements **g**_{1} and **g**_{2} (although the distribution **f** is the original unknown, the estimation of **f**_{r} is thoroughly equivalent because **f** and **f**_{r} are proportional to each other through the scaling factor *k*_{1}). In principle, all unknowns could be estimated through some numerical method (*e.g.*, a nonlinear optimization routine) applied to the nonlinear system of Eqs. (5, 6). In practice, however, such a possibility is discarded because most problems are extremely ill-posed, and solutions obtained through standard methods would be rather poor. Alternatively, specific numerical methods for ill-posed problems should be applied. Even when a specific inversion method is available, only Eq. (4.a) could be solved. In fact, Eq. (4.a) can be seen as a set of m_{1} nonlinear algebraic equations that includes n+1 (< m_{1}) unknowns; whereas Eq. (4.b) expands into a set of m_{2} nonlinear algebraic equations in n + m_{2} + 1 (> m_{2}) unknowns. Thus, Eq. (4.b) could only be solved if an estimate of **f**_{r} or **p** is available.

**B. Estimation of the Unknown Parameters 'p'**

Assume that an acceptable estimate of **f**_{r}, denoted f̂_{r}, is available. Since vector **p** affects **g**_{2}, Eq. (4.b) can be used to calculate **p**, provided that *K* is known. Even when *K* is unknown, Eq. (4.b) allows estimating **p** in the two following cases:

__Case 1__: for any **p**_{1} ≠ **p**, A_{2}(p)f_{r} ≠ *q* A_{2}(p_{1})f_{r}, regardless of the selected scalar constant *q*. In such a case, *p*(*y*_{2}) mainly affects the shape of *g*_{2}(*y*_{2}), and *K* can be eliminated through an adequate normalization of Eq. (4.b), as follows:

g_{2} / ||g_{2}||_{1} = A_{2}(p) f̂_{r} / ||A_{2}(p) f̂_{r}||_{1} | (7) |

where ||.||_{1} stands for the standard 1-norm of a vector (*i.e.*, ||**a**||_{1} = Σ |*a*_{i}|). Equation (7) expands into a set of m_{2} nonlinear algebraic equations in m_{2} unknowns (the components of **p**).

__Case 2__: (i) for some **p**_{1} ≠ **p**, there is a scalar constant *q* that verifies A_{2}(p)f_{r} ≅ *q* A_{2}(p_{1})f_{r}; and (ii) *p*_{0} is a known component of *p*(*y*_{2}), *i.e.*, at a given *y*_{2} = *y*_{2,0}, *p*_{0} = *p*(*y*_{2,0}). In such a case, *p*(*y*_{2}) mainly affects the magnitude (but not the shape) of *g*_{2}(*y*_{2}), and a first estimate of *K* can be calculated from Eq. (4.b) evaluated at *y*_{2} = *y*_{2,0}, yielding:

K̂_{0} = g_{2}(y_{2,0}) / [A_{2}(p_{0}) f̂_{r}]_{y_{2,0}} | (8) |

Then, Eq. (8) is introduced into Eq. (4.b), and **p** can be obtained by solving:

g_{2} = K̂_{0} A_{2}(p) f̂_{r} | (9) |

Equation (9) can be directly solved for the remaining m_{2} - 1 unknown components of **p**. Both Eq. (7) and Eq. (9) can be solved through a classical estimation algorithm for a set of nonlinear algebraic equations, such as the Gauss-Newton or the Levenberg-Marquardt methods (see, for example, Dennis, 1977; and Moré, 1977).
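A minimal sketch of solving the normalized system of Eq. (7) with the Levenberg-Marquardt method is given below, using `scipy.optimize.least_squares`. The two-parameter exponential kernel, the grids, and the 'true' values are purely illustrative; in the paper, **p** has one component per measurement point of g_{2}.

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of Step 1 in Case 1: solving the 1-norm-normalized system of
# Eq. (7) by Levenberg-Marquardt.  The kernel, f_r, and the true
# parameter values are illustrative only.
x = np.linspace(0.1, 1.0, 15)
y = np.linspace(0.5, 2.0, 10)
f_r = np.exp(-((x - 0.5) ** 2) / 0.05)      # assumed available estimate of f_r

def A2(p):
    # Illustrative two-parameter kernel A2(p)[i, j]
    return np.exp(-p[0] * np.outer(y, x)) * (1.0 + p[1] * x)

p_true = np.array([1.5, 0.8])
g2 = 3.7 * (A2(p_true) @ f_r)               # K = 3.7 plays the unknown constant

def residual(p):
    m = A2(p) @ f_r
    # Eq. (7): normalization eliminates the unknown K
    return g2 / np.abs(g2).sum() - m / np.abs(m).sum()

sol = least_squares(residual, x0=[1.0, 0.5], method="lm")
print(np.allclose(sol.x, p_true, atol=1e-4))
```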

**C. The Proposed Estimation Method**

To estimate all unknowns, the following four-step procedure is proposed.

*Step 0 (initial conditions)*. Solve the inverse problem of Eq. (4.a) to find a first estimate of **f**_{r} (f̂_{r,0}) and the estimate of *p*_{0} (p̂_{0}). Only the measurement **g**_{1} is required. The method given by Frontini and Fernández Berdaguer (2003) can be applied.

*Step 1 (estimate of ***p***)*. Obtain an estimate of **p** (p̂) by solving Eq. (7) in *Case 1*, or Eq. (9) in *Case 2*. The first estimate of **f**_{r} and the measurement **g**_{2} are required.

*Step 2 (estimate of K)*. Estimate the normalization parameter *K* that conciliates Eqs. (4), on the basis of the **p** and **f**_{r} estimates given in the previous steps. The method proposed by Vega *et al.* (2003) can be applied, which provides the following least-squares solution:

K̂ = (ĝ_{2}^{T} g_{2}) / ||ĝ_{2}||_{2}^{2} | (10) |

with ĝ_{2} = A_{2}(p̂) f̂_{r,0}.

*Step 3 (estimate of ***f**_{r}*)*. Estimate an improved **f**_{r} distribution (f̂_{r}) by numerically inverting Eqs. (5, 6), with the estimates of *p*_{0}, **p**, and *K* given in the previous steps.

The proposed numerical method is schematically represented in Fig. 1. Notice that: (i) each step can be sequentially implemented and solved; (ii) in practice, the main outputs of this algorithm are the estimates of **p** and **f**_{r}; (iii) only one of Eq. (7) or Eq. (9) must be implemented in *Step 1*, depending on the corresponding *Case*; and (iv) for inverting Eqs. (5, 6), the estimate of *p*_{0} is only used in *Case 1* if *p*_{0} does not belong to vector **p**; whereas in *Case 2*, *p*_{0} is directly included in the estimate of vector **p**.

**Figure 1**. *The proposed calculation scheme.*

**III. NUMERICAL EXAMPLE**

To evaluate the proposed estimation method, a simulated numerical example is presented. "Synthetic" or simulated examples are ideal for investigating data treatment procedures, because the solutions are *a priori* known. The basic example already investigated by Vega *et al.* (2003) is analyzed here, but now under the assumption of several unknown parameters in the mathematical model.

Consider a polymer latex sample, which basically consists of spherical particles suspended in an aqueous medium. Typically, the particle diameters (*D*) of a latex range from several nanometers to a few micrometers, with concentrations of *ca.* 10^{9} part./cm^{3}. Since the particles differ in size, a whole size distribution is present in the sample. In general, the number PSD, *f*(*D*), represents the number of particles present in a given interval of diameters [*D*, *D* + d*D*].

The analysis considers a polystyrene (PS) polymer latex with water as the dispersion medium, and a global concentration of 5×10^{9} part./cm^{3}. The *a priori* known (discrete and bimodal) PSD consists of 56 evenly spaced points in the diameter range [50, 600] nm. It was obtained by combining two log-normal distributions, with mean diameters of 200 and 400 nm, standard deviations of 0.15 and 0.075, and number fractions of 0.85 and 0.15, respectively.
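The simulated bimodal PSD can be reproduced approximately as follows, assuming that the quoted standard deviations apply to ln *D* (the text does not state this explicitly):

```python
import numpy as np

# Sketch of the simulated bimodal PSD: 56 evenly spaced diameters in
# [50, 600] nm (i.e., every 10 nm), the weighted sum of two log-normal
# modes.  Assumption: the quoted standard deviations are those of ln D.
D = np.linspace(50.0, 600.0, 56)              # diameters, nm

def lognormal(D, mean_D, sigma):
    z = (np.log(D) - np.log(mean_D)) / sigma
    return np.exp(-0.5 * z * z) / (D * sigma * np.sqrt(2.0 * np.pi))

f = 0.85 * lognormal(D, 200.0, 0.15) + 0.15 * lognormal(D, 400.0, 0.075)
f /= f.sum()                                  # normalize to number fractions

print(D[np.argmax(f)])                        # → 200.0 (the main mode)
```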

Two experiments based on ELS and T measurements are simulated. In the ELS experiment, a monochromatic light beam (of wavelength λ_{0} = 632.8 nm) falls onto the diluted particle sample, and a light intensity *I*_{s}(θ) is scattered by the particles at each angle, θ. In ELS, the measurement is the light intensity ratio *I*(θ) = *I*_{s}(θ) / *I*_{s}(θ_{0}), with respect to a reference angle, θ_{0}; and it can be calculated using the Mie theory, as follows (Bohren and Huffman, 1983):

I(θ) = k_{I} ∫ C_{I}[θ, D, n_{p}(λ_{0})] f(D) dD | (11) |

where *C*_{I}(θ, *D*) represents the light intensity scattered by a particle of diameter *D* at the angle θ, calculated through the Mie scattering theory (Bohren and Huffman, 1983); *k*_{I} is an unknown constant; and *n*_{p}(λ_{0}) [= *n*_{p,0} = 1.5729] represents the particle refractive index at the experiment wavelength. A range of measurement angles [θ_{min}, θ_{max}] = [12°, 150°] is considered, with measurements taken every 2°. The relative measurements *I*(θ) are commonly reported in angular light scattering determinations (Glatter *et al.*, 1985).

In the T experiment, *I*_{i}(λ) is the intensity at the light source, *I*_{t}(λ) is the intensity at the light detector, and λ is the wavelength of the incident light in vacuum. The measurement is the turbidity spectrum, τ(λ), defined as the light intensity attenuation experienced by the beam of light traversing the diluted sample; *i.e.*, τ(λ) = ln[*I*_{i}(λ) / *I*_{t}(λ)]. For spherical particles, τ(λ) is related to *f*(*D*) according to the Mie theory (Bohren and Huffman, 1983), as follows:

τ(λ) = k_{τ} ∫ D^{2} Q_{ext}[λ, D, n_{p}(λ)] f(D) dD | (12) |

where *Q*_{ext} is the extinction efficiency of a particle of diameter *D* at λ; *k*_{τ} is a constant that depends on the optical path length; and *n*_{p}(λ) represents the particle refractive index at each λ. Upper and lower limits for λ, [λ_{min}, λ_{max}] = [306, 701] nm, are also considered, with measurements taken every 5 nm.

For a given system, *k*_{τ} could in principle be exactly calculated as *k*_{τ} = π ℓ / 4, where ℓ is the optical path length (basically, the thickness of the cell). This is not the case for *k*_{I}, however, which depends on the PSD through the following theoretical relationship (Vega *et al.*, 2003): *k*_{I} = 1 / ∫ *C*_{I}(θ_{0}, *D*) *f*(*D*) d*D*.

Consider now the discrete PSD (with the diameter axis sampled each 10 nm), described by:

**f** = [*f*(*D*_{1}) *f*(*D*_{2}) ... *f*(*D*_{n})]^{T} (13)

Assume the ELS and T measurements respectively represented by the following two vectors:

g_{1} = [I(θ_{1}) I(θ_{2}) ... I(θ_{m_{1}})]^{T} | (14.a) |

g_{2} = [τ(λ_{1}) τ(λ_{2}) ... τ(λ_{m_{2}})]^{T} | (14.b) |

with *k*_{I} = *k*_{1} = 2.0252×10^{-10}, and *k*_{τ} = *k*_{2} = 0.19635 cm; thus *K* = 9.6953×10^{8} cm.

After calculating the discrete version of Eqs. (11) and (12), a problem thoroughly equivalent to that of Eqs. (3) is obtained. The unknown parameters are *n*_{p}(λ); and *n*_{p}(λ_{0}) is one of the *n*_{p}(λ), because λ_{0} belongs to the selected range of λ.

Measurements of ELS and T were simulated according to Eqs. (11) and (12). A random Gaussian noise of standard deviation equal to 0.1% of the maximum of each measurement was added. Both measurements are represented in Fig. 2 as continuous functions (thick trace), although they have a finite number of points (m_{1} and m_{2}) indicated in the figures. To quantify the effects of the parameters on the measurements, uncertainties of ±1% around the true values of *n*_{p,0} and *n*_{p}(λ) were simulated, and the corresponding measurements are represented in thin traces in Fig. 2. The simulation results indicate that a constant error in *n*_{p}(λ) mainly modifies the magnitude of τ(λ), with a slight effect on its shape. Thus, the parameter estimation problem corresponds to *Case 2*, and *n*_{p}(λ) should be estimated through Eq. (9).

**Figure 2**. *Sensitivity of: (a) the ELS measurement [to ±1% in n_{p,0}]; (b) the T measurement [to ±1% in n_{p}(λ)].*

Throughout this work, the Phillips-Tikhonov regularization method was applied to solve all inversion problems. In that method, if **A** is an ill-conditioned matrix, a regularized pseudo-inverse of **A** may be computed through: **A**^{[-1]} = (**A**^{T} **A** + α **H**)^{-1} **A**^{T}, where **H** is a selected regularization matrix, and α is the adjustable regularization parameter (Phillips, 1962; Tikhonov and Arsenin, 1977).
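A minimal sketch of this regularized pseudo-inverse follows, with **H** = **I** (zeroth-order regularization; an assumption, since the text does not state its choice of **H**) and a toy ill-conditioned smoothing problem:

```python
import numpy as np

# Sketch of the Phillips-Tikhonov regularized pseudo-inverse
# A^[-1] = (A^T A + alpha H)^(-1) A^T.  H = I is assumed here.
def tikhonov_inverse(A, alpha, H=None):
    n = A.shape[1]
    if H is None:
        H = np.eye(n)                            # zeroth-order regularization
    return np.linalg.solve(A.T @ A + alpha * H, A.T)

# Toy ill-conditioned smoothing problem (illustrative only).
rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 40)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005)
f_true = np.exp(-((x - 0.5) ** 2) / 0.01)
g = A @ f_true + 1e-4 * rng.standard_normal(x.size)

f_naive = np.linalg.lstsq(A, g, rcond=None)[0]   # noise-amplified solution
f_reg = tikhonov_inverse(A, 1e-6) @ g            # stabilized solution
err_naive = np.linalg.norm(f_naive - f_true)
err_reg = np.linalg.norm(f_reg - f_true)
print(err_reg < err_naive)
```

The comparison illustrates the point of Section I: without regularization, the small measurement noise is amplified by the near-singular kernel, while the regularized inverse remains stable.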

The algorithm detailed in the previous section was applied. The main simulation results are presented in Fig. 3. From the ELS measurement, *I*(θ), the method developed by Frontini and Fernández Berdaguer (2003) was applied to solve *Step 0* of the algorithm. The following was obtained: p̂_{0} = 1.5757, and the first PSD estimate, f̂_{r,0}, indicated in Fig. 3. Although f̂_{r,0} clearly differs from the true PSD, Eq. (11) is fulfilled by the estimates p̂_{0} and f̂_{r,0}, thus revealing the ill-posed characteristic of the inverse problem. Also, the first estimate of the normalization parameter (K̂_{0} = 9.6881×10^{6}) was obtained after evaluating Eq. (8) at λ_{0} = 632.8 nm.

**Figure 3**. *(a) The true refractive index, n_{p}(λ), and its estimate, n̂_{p}(λ). (b) The true PSD, f(D), and its estimates.*

Based on K̂_{0} and f̂_{r,0}, Eq. (9) was used to estimate *n*_{p}(λ); and from Eq. (10), K̂ = 9.6543×10^{6} was obtained. Then, Eqs. (5, 6) were inverted to obtain the estimated PSD, f̂_{r}. These estimates are represented in Fig. 3. For comparison, the PSD estimate obtained when all parameters are exactly known is also represented in Fig. 3; it was obtained by solving Eqs. (5, 6), *i.e.*, *Step 3* of the algorithm. Notice that the refractive index estimate, n̂_{p}(λ), resulted slightly higher than the true value, as a consequence of the erroneous p̂_{0}, but the shape of the true *n*_{p}(λ) is closely recovered. Thus, if a better initial estimate of *n*_{p,0} were available, then a practically exact estimation of the refractive index function would be obtained. The accuracy of the estimated *n*_{p}(λ) ranged from 0.15% to 0.35% with respect to the true value.

Due to the biased p̂_{0}, the solution of the combined problem in *Step 3* originates an erroneous *K* estimate, even worse than the initial estimate K̂_{0}. However, the PSD estimate f̂_{r} resulted somewhat better than the original f̂_{r,0}.

To evaluate the quality of the PSD estimates, the following performance indexes were defined:

E_{f} = ||f̂ - f|| / ||f|| | (15.a) |

E_{D} = |D_{w}(f̂) - D_{w}(f)| / D_{w}(f) | (15.b) |

where *D*_{w}(*f*) is the weight-average diameter of the PSD, which can be calculated as:

D_{w}(f) = [Σ_{i} f(D_{i}) D_{i}^{4}] / [Σ_{i} f(D_{i}) D_{i}^{3}] | (16) |
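The weight-average diameter and relative error indexes of the kind defined above can be sketched as follows; the exact form of the indexes is an assumption (a relative 2-norm error for the PSD, and a relative *D*_{w} error):

```python
import numpy as np

# Sketch of D_w and the performance indexes.  Assumption: E_f is a
# relative 2-norm error of the PSD and E_D a relative error of the
# weight-average diameter.
def weight_average_diameter(D, f):
    # D_w = sum_i f(D_i) D_i^4 / sum_i f(D_i) D_i^3
    return np.sum(f * D ** 4) / np.sum(f * D ** 3)

def performance_indexes(D, f_hat, f_true):
    E_f = np.linalg.norm(f_hat - f_true) / np.linalg.norm(f_true)
    Dw_hat = weight_average_diameter(D, f_hat)
    Dw = weight_average_diameter(D, f_true)
    return E_f, abs(Dw_hat - Dw) / Dw

D = np.linspace(50.0, 600.0, 56)
f_true = np.exp(-((D - 200.0) ** 2) / 2.0e3)
E_f, E_D = performance_indexes(D, 1.05 * f_true, f_true)
print(round(E_f, 6), round(E_D, 6))   # a pure 5% scaling leaves D_w unchanged
```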

The errors corresponding to the three PSD estimates are indicated in Table 1. The error of the initial estimate is relatively high, mainly because the first peak of f̂_{r,0} resulted shifted with respect to the true PSD. On the other hand, the errors of the other two estimates are practically equal, even though *n*_{p}(λ) was not exactly estimated. In all cases, however, an accurate mean diameter was obtained, with relative errors lower than 1%.

**Table 1**. *PSD estimation errors [D_{w}(f) = 325.0 nm]*

Although the proposed algorithm only considers the sequential steps commented above, some additional simulations were implemented to explore a further improvement of the estimates. The obtained f̂_{r} was used to recalculate *n*_{p,0}, but no meaningful change in its value was observed. Then, f̂_{r} was injected into Eq. (9) instead of f̂_{r,0}; but the new estimate of *n*_{p}(λ) resulted practically coincident with its original estimate. Thus, at least for this specific problem, iterations do not improve the solution.

**IV. CONCLUSIONS**

A numerical method for solving an ill-conditioned inverse problem with unknown kernel parameters was presented. The algorithm assumes that two independent sets of indirect measurements are available, and it consists of four sequential steps based on the mathematical models of the measurements. While the first measurement set involves a single unknown kernel parameter, each measurement point of the second set directly depends on a different unknown parameter. Also, the method considers the more general case in which the measurements are not absolute, *i.e.*, when unknown proportionality constants are included.

If the problem were well-conditioned, then no uncertainty would be present in *Step 0*, and after *Step 1* the problem would be completely solved. This is not the case for ill-conditioned problems, however: the outputs of *Step 0* can be erroneous, and therefore *Step 1* will not provide exact estimates of the parameters. The inclusion of *Steps 2* and *3* in the algorithm increases the information content of the problem by combining the measurements.

The algorithm can be applied in the more general case of indirect measurements with typically unknown proportionality constants. If such constants were *a priori* known, Eq. (9) should be replaced by Eq. (4.b), and Eq. (10) should be discarded.

Each step of the proposed algorithm can be sequentially implemented. However, the estimation of *K* after *Step 2* should coincide with its initial estimate given by Eq. (8). Any detected differences between those values could in principle be used to implement an iterative procedure that allows improving the quality of the estimates. Such methodology should be further investigated to establish conditions that guarantee convergence of the iterations.

Extension of this algorithm for including measurements of other physical variables, or for incorporating a third set of measurements, seems to be straightforward. Inclusion of a new measurement provided by a different device will require the development of its corresponding mathematical model. Inclusion of a third measurement set will require the estimation of two normalization parameters.

The performance of the algorithm was successfully evaluated through a numerical example. The simulated example is technologically important, because the particle refractive index is often unknown. Moreover, in some cases the estimation of the particle refractive index may be the key problem to be solved, regardless of the PSD; for that purpose alone, the proposed algorithm may also be applied.

**REFERENCES**

1. Bohren, C. F. and D. R. Huffman, *Absorption and Scattering of Light by Small Particles*, John Wiley & Sons, New York (1983).

2. Dennis, J. E., "Nonlinear least squares", in *The State of the Art in Numerical Analysis*, D. Jacobs, Ed., Academic Press, London (1977).

3. Engl, H. W., M. Hanke and A. Neubauer, *Regularization of Inverse Problems*, Kluwer Academic Publishers, Dordrecht, The Netherlands (1996).

4. Eliçabe, G. E. and G. L. Frontini, "Determination of the particle size distribution of latex using a combination of elastic light scattering and turbidimetry: A simulation study", *J. Coll. and Interf. Sci.*, **181**, 669-672 (1996).

5. Frontini, G. L. and G. E. Eliçabe, "Combination of indirect measurements to improve signal estimation", *Lat. Amer. Appl. Res.*, **30**, 99-105 (2000).

6. Frontini, G. L. and E. M. Fernández Berdaguer, "Inversion of elastic light scattering measurements to determine refractive index and particle size distribution of polymeric emulsions", *Inverse Probl. Eng.*, **11**, 329-340 (2003).

7. Glatter, O., M. Hofer, C. Jorde and W. Eigner, "Interpretation of elastic light-scattering data in real space", *J. Coll. and Interf. Sci.*, **105**, 577-586 (1985).

8. Golub, G. H., M. Heath and G. Wahba, "Generalized cross-validation as a method for choosing a good ridge parameter", *Technometrics*, **21**, 215-223 (1979).

9. Hansen, P. C., "Regularization tools: A Matlab package for analysis and solution of discrete ill-posed problems", *Numerical Algorithms*, **6**, 1-35 (1994).

10. Kirsch, A., *An Introduction to the Mathematical Theory of Inverse Problems*, Springer-Verlag, New York (1996).

11. Moré, J. J., "The Levenberg-Marquardt algorithm: Implementation and theory", in *Numerical Analysis*, G. Watson, Ed., Lecture Notes in Mathematics 630, Springer-Verlag, Berlin (1977).

12. Phillips, D. L., "A technique for the numerical solution of certain integral equations of the first kind", *J. ACM*, **9**, 84-97 (1962).

13. Tikhonov, A. N. and V. Arsenin, *Solution of Ill-posed Problems*, Wiley, New York (1977).

14. Vega, J. R., G. L. Frontini, L. M. Gugliotta and G. E. Eliçabe, "Estimation of the normalization parameter required to combine measurements in an inverse problem", *Proc. IX Reunión de Trabajo en Procesamiento de la Información y Control*, Santa Fe, Argentina, 150-154 (2001).

15. Vega, J. R., G. L. Frontini, L. M. Gugliotta and G. E. Eliçabe, "Particle size distribution by combined elastic light scattering and turbidity measurements. A novel method to estimate the required normalization factor", *Part. & Part. Syst. Charact.*, **20**, 361-369 (2003).