Latin American applied research

print version ISSN 0327-0793

Lat. Am. appl. res. vol.41 no.3 Bahía Blanca July 2011

 

Mesh optimization with volume preservation using GPU

J.P. D'amato and P. Lotito

PLADEMA-CONICET, Universidad Nacional del Centro, Tandil, Argentina
jpdamato@exa.unicen.edu.ar; plotito@exa.unicen.edu.ar

Abstract - Surface mesh optimization represents a challenge, especially when it corresponds to an evolving geometry used in numerical simulations, where the mesh is restricted to an enclosed volume and certain features must be preserved. In this context, we propose an efficient and parallelizable method to retrieve the local volume after performing successive operations, such as smoothing and topological changes that optimize the quality of the triangles. Due to the complexity of this algorithm, the GPU is used as a calculation unit to reduce processing time.
Additionally, we present a simple and robust detection of singularities that helps to preserve the appearance of the mesh during the remeshing process. The process is finally validated through experimental studies on simple geometries (such as spheres and regular polyhedra) and then applied to non-regular meshes obtained from segmentations of medical images or from CAD tools.

Keywords - Mesh Generation; Quality Metrics; Optimization; Volume Preservation.

I. INTRODUCTION

Quality surface mesh generation is still a geometric modeling issue in engineering simulations. The meshes used in these calculations require a prescribed element size over each region of space and a minimum quality per element. In many simulations, such as fluid flow through a coronary artery, the meshes must also satisfy boundary conditions entered by the user.

There are many methods for obtaining quality triangulations. Some are called direct methods, where the mesh is generated directly on the 3D surface using octrees (Schroeder and Shephard, 1990) or a frontal method (Dari et al., 1997). Other methods, called indirect, parameterize the original mesh and generate a new sampling of the domain (Baker, 2004; Freitag and Plassmann, 2000). Direct methods have difficulty in checking mesh validity, while indirect ones suffer from the complexity of defining the size and shape of the elements generated in another domain.

Another type of method, widely accepted because of its efficiency, consists in remeshing arbitrary triangulations generated by CAD tools or by automatic reconstruction methods, which often include defective elements or elements with null area. This process, usually iterative, makes changes on the elements, either adding or removing vertexes so that the mesh meets the size requirements, and swapping edges to improve quality. One of the first to use this strategy was Coupez (1994), who proposed restricted changes on the topology of the mesh; it was then extended by Heckbert and Garland (1999), who defined a metric to choose the edges to be removed without substantially affecting the appearance, and by Suárez et al. (2009), who introduced a new subdivision method to obtain better quality triangles. However, the greatest improvements in quality and efficiency come from relocating the vertexes by applying local smoothing filters (Brewer et al., 2003) or by smoothing the entire mesh, as in Sorkine et al. (2004).

Another desirable feature of a mesh is the conservation of volume, which allows more stable simulations of dynamical systems. It is well known that a simple procedure such as Laplacian smoothing significantly shrinks the domain volume (Taubin, 1995). Therefore, the preservation of volume has been studied in many works in recent years (Zhang et al., 2005; Zhou and Huang, 2005). In this context, several methods were proposed to reduce errors, such as volumetric boundaries or vertex movement constrained to parameterized spaces (Frey and Borouchaki, 1998; Garimella et al., 2002). Even with these restrictions, if smoothing is frequently applied within an adaptive process it tends to accumulate errors.

Our proposal is an iterative remeshing method, similar to Wang et al. (2006), that attempts to achieve reasonable quality elements by reshaping the surface with elements of the desired size while maintaining a minimum difference of volume through the detection of singularities and the projection of the vertexes. The main contribution is an indicator to measure, and then efficiently reduce, the local volumetric error with respect to a reference mesh after introducing changes in topology or completing a relocation of vertexes by smoothing.

The next section explains in detail the steps of the method. The following one describes the data and strategies used to improve the quality of the elements. Then the proposed volumetric difference estimation and its reduction using the GPU are introduced, and the last section shows some evaluation cases.

II. OUR METHOD

The proposed method has the objectives of efficiency, robustness, quality and preservation of the representation. Starting from meshes generated by CAD tools or by automatic systems, high quality meshes are expected to be obtained by performing the following steps (a high-level sketch of the loop is given after the list):

(1) Defining the initial mesh and the desired element size.

(2) Remeshing the initial surface to meet the requirements of element size and singularity preservation. The remeshing is conducted through local changes such as edge splitting or collapsing.

(3) Improving the quality of the elements through vertex relocation and edge swapping. The vertex relocation is based on calculating the optimum position of the vertex within its neighborhood. Both operations reduce the number of defective triangles.

(4) Measuring and reducing the volume difference between the modified mesh and the original one. It is estimated how much the mesh has moved away from its initial configuration, using a local indicator and a process to reduce the gap.
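To make the loop concrete, the following Python sketch shows how these four steps could be chained. The function names and their empty bodies are illustrative placeholders for the operations described in the next sections, not the authors' actual code.

def remesh_to_size(mesh, h):
    # Step 2: split edges longer than SDIV, collapse edges shorter than SCOL.
    return mesh

def improve_quality(mesh):
    # Step 3: vertex relocation (smoothing) and edge swapping.
    return mesh

def reduce_volume_difference(mesh, reference):
    # Step 4: measure the local volume gap and project vertexes toward the reference.
    return mesh

def optimize(mesh, reference, h, iterations=20):
    # Steps 2-4 are repeated until the mesh meets the size and quality targets.
    for _ in range(iterations):
        mesh = remesh_to_size(mesh, h)
        mesh = improve_quality(mesh)
        mesh = reduce_volume_difference(mesh, reference)
    return mesh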

We have to remark that the design of the method is modular, which makes it possible to replace the algorithm used in any step by another one in order to improve efficiency or to reduce error. For example, to implement an element subdivision like the one shown in Suárez et al. (2009), it is enough to change the module responsible for this task.

Each stage is detailed in the following sections.

A. Geometry description

The first step in the process of mesh generation is to introduce the geometry description. The general mechanism is to use objects designed specifically for the problem using commercial CAD tools. However, it is very rare that the system receives the mesh as required, because a CAD designer is only interested in a visually pleasing model, without worrying about removing the parts that do not belong to the mesh and without paying attention to whether there is precise contact between them.

The meshes generated by automated tools from reconstructions of tomography images suffer from the same problems, although the definition time is greatly reduced. There is a variety of these tools, most of which are intended to generate three-dimensional models for medical analysis. For example, in Jonas et al. (2009) a snakes-based segmentation is used which, from ultrasound images, achieves reasonable surface meshes, but not good enough to be used in a simulation.

Regardless of how the geometry is generated, the information fed to the surface mesh generator is a triangulation. Generally, a mesh generator requires the boundary of the object without discontinuities, such as vertexes at the same position but referenced by different elements.

B. Specifying elements size

To generate a finite element mesh, either surface or volume, the length of the edges of the elements must be specified for each point in space as a function h(x, y, z).

This length can be controlled in various ways. In one of them, the user specifies the size through a volume mesh (Peiro et al., 1994). Another way is to use the curvature as a metric for determining the required size, which allows good precision in the surface representation (Borouchaki et al., 2000; Lee, 2001). The curvature is especially useful if a parametric representation of the surface is used (Clausse et al., 2004).

In this work, users specify the h function through a volume mesh. After the definition of the mesh size, the standard edge length is computed in a normalized way, as in Frey (2000), and a range of acceptance is defined; the elements outside the range will be refined or thickened. Two thresholds are used: if an edge is longer than SDIV it is split, and if its length is less than SCOL it is collapsed.
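As an illustration of this size test, consider the following Python sketch; the threshold values S_DIV and S_COL and the evaluation of h at the edge midpoint are assumptions made for the example.

import math

S_DIV = 1.4   # split edges whose normalized length exceeds this threshold
S_COL = 0.7   # collapse edges whose normalized length falls below this threshold

def normalized_length(p, q, h):
    # Edge length measured in units of the local target size h(x, y, z),
    # here evaluated at the edge midpoint for simplicity.
    mid = tuple((a + b) / 2.0 for a, b in zip(p, q))
    return math.dist(p, q) / h(*mid)

def classify_edge(p, q, h):
    ln = normalized_length(p, q, h)
    if ln > S_DIV:
        return "split"
    if ln < S_COL:
        return "collapse"
    return "keep"

# Example with a constant target size of 0.5 everywhere:
print(classify_edge((0, 0, 0), (1, 0, 0), lambda x, y, z: 0.5))  # -> 'split'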

This procedure is suitable for adaptive remeshing. In these situations it is important to get the value of h efficiently, requiring a classification strategy for searching through the volume mesh in minimal time.

III. QUALITY IMPROVEMENT AND SIZE CONFORMATION

The initial triangulation is not useful for numerical simulations, as most triangles do not meet the requirements of size or quality. To produce appropriate meshes, we have to improve them (Frey and Borouchaki, 1999; Frey, 2000; Cougny, 1998; Vénere, 1997), while preserving certain characteristics or singularities of the meshes. Mesh improvement is performed via refinement and thickening, in addition to local operations such as edge swapping. It also requires vertex movement to improve the quality of the elements (Wang et al., 2006).

One of the most accepted ways to define the quality Q of each triangle T has the form of Eq. (1),

Q(T) = \frac{4\sqrt{3}\, A_T}{\sum_{i=1}^{3} L_{Ti}^{2}}     (1)

where AT is the area of the element T and LTi is the length of each edge; the constant normalizes Q to 1 for an equilateral triangle. This is the approach used in the process outlined.
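A small Python sketch of this quality measure follows; the 4*sqrt(3) normalization, which makes an equilateral triangle score exactly 1, is assumed from the usual form of this metric.

import math

def triangle_quality(a, b, c):
    # Q is proportional to the area divided by the sum of squared edge lengths,
    # scaled so that an equilateral triangle gives Q = 1.
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    cx = ab[1] * ac[2] - ab[2] * ac[1]
    cy = ab[2] * ac[0] - ab[0] * ac[2]
    cz = ab[0] * ac[1] - ab[1] * ac[0]
    area = 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)
    sq_edges = math.dist(a, b) ** 2 + math.dist(b, c) ** 2 + math.dist(c, a) ** 2
    return 4.0 * math.sqrt(3.0) * area / sq_edges

print(triangle_quality((0, 0, 0), (1, 0, 0), (0.5, math.sqrt(3) / 2, 0)))  # ~1.0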

A. Singularities detection

Geometric feature detection is a critical step in preserving the mesh volume after smoothing or thickening the mesh.

There is a variety of techniques for feature detection. In this work we use a technique proposed by Jiao (2006), based on a quadratic tensor analysis and implemented using the connectivity information of the elements. The nodes are identified by analyzing the greatest angle between the normals of their neighbors.

Using this strategy, we can find the nodes that lie in a sharp region and preserve them along the remeshing process. These nodes are marked as singular, as in Fig. 1, and are evaluated differentially while the mesh is modified. After applying changes to the mesh, e.g. when an edge is split, new singular nodes can arise and they must be recalculated.
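The following Python sketch illustrates this kind of marking: a vertex is flagged as singular when the normals of two of its incident triangles differ by more than a threshold angle. The 40-degree threshold and the brute-force pairwise comparison are assumptions for the example, not the exact procedure of Jiao (2006).

import math

def triangle_normal(a, b, c):
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = math.sqrt(sum(x * x for x in n)) or 1.0
    return [x / norm for x in n]

def mark_singular_vertices(vertices, triangles, angle_deg=40.0):
    # Return the indices of vertexes whose incident triangle normals differ
    # by more than angle_deg, i.e. vertexes lying on a sharp feature.
    normals = [triangle_normal(*(vertices[i] for i in t)) for t in triangles]
    incident = {i: [] for i in range(len(vertices))}
    for t_idx, t in enumerate(triangles):
        for v in t:
            incident[v].append(t_idx)
    cos_thr = math.cos(math.radians(angle_deg))
    singular = set()
    for v, tris in incident.items():
        for i in range(len(tris)):
            if v in singular:
                break
            for j in range(i + 1, len(tris)):
                ni, nj = normals[tris[i]], normals[tris[j]]
                if sum(a * b for a, b in zip(ni, nj)) < cos_thr:
                    singular.add(v)
                    break
    return singular

# Two triangles meeting at a right angle along the edge (0,0,0)-(1,0,0):
verts = [(0, 0, 0), (1, 0, 0), (0.5, 1, 0), (0.5, 0, 1)]
tris = [(0, 1, 2), (0, 3, 1)]
print(mark_singular_vertices(verts, tris))  # -> {0, 1}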


Figure 1: Edges connecting singular vertexes, colored green

B. Local changes

To transform the mesh into one that meets the requirements for the finite element method, we use the technique presented in Vénere (1997), also adopted in Wang et al. (2006), and based on three types of local modifications:

1. Node insertion dividing the edges of elements larger than SDIV.

2. Edge collapse for those smaller than specified in SCOL.

3. Swap of diagonals between neighboring triangles where it is possible and desirable, as shown in Fig. 2.


Figure 2. Case of diagonal swapping: (left) original triangles, (right) resulting triangles.

This optimization technique, called connectivity optimization in Wang et al. (2006), involves structural changes aimed at a partial improvement of the mesh. The first operation adds nodes in low-densified sections, while the second performs the opposite operation. The swapping of diagonals is used to optimize the quality of the elements.

Because we are working on a surface and not on a plane, we need to take certain additional precautions to avoid unacceptable changes of geometry. When a vertex or an edge is to be deleted by a collapse or a swap, the singularity condition must be verified. If both vertices of an edge are singular, then the swap is ignored. If only one is singular, care must be taken that the new configuration of triangles does not change the state of the vertex. This condition restricts the swap of diagonals to cases where the two triangles are quasi-coplanar (the dihedral angle is almost flat).
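A minimal Python sketch of such an admissibility test follows; the 5-degree coplanarity tolerance is an assumed value.

import math

def is_swap_allowed(n1, n2, v1_singular, v2_singular, tol_deg=5.0):
    # n1, n2: unit normals of the two triangles sharing the candidate edge;
    # v1_singular, v2_singular: singularity flags of the edge endpoints.
    if v1_singular and v2_singular:
        return False  # never swap an edge joining two singular vertexes
    cos_angle = sum(a * b for a, b in zip(n1, n2))
    return cos_angle >= math.cos(math.radians(tol_deg))  # quasi-coplanar triangles only

print(is_swap_allowed((0, 0, 1), (0, 0, 1), False, False))  # True: coplanar
print(is_swap_allowed((0, 0, 1), (0, 1, 0), False, False))  # False: sharp fold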

This technique of three operations is exclusive to triangular elements and generally produces non-structured meshes. Thanks to it, the obtained discretizations can have differences in densification while ensuring good quality of the elements, making them suitable for use in an adaptive context. Another benefit of this method is its low computational cost, which allows users to work interactively. Under certain conditions, the method could be implemented on a parallel architecture.

C. Vertex relocation

Relocation means changing the position of the vertexes without changing the topology of the triangles related to them. In this case, the goal is to find an optimal position for the vertex on the surface.

A simple algorithm for polygonal mesh smoothing is the so-called relaxation or Laplacian filter (Plassmann and Bern, 2004). It consists of moving each node of the surface to the geometric center determined by its adjacent vertexes. This calculation produces a displacement of points and results in some loss of the original volume. The Taubin filter (Taubin, 1995) is a variant of this form, which reduces the volume error but requires more evaluations for convergence.
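A minimal Python sketch of the Laplacian filter, with an option to keep singular vertexes fixed as done in the implemented process described below, is given here; the data layout and the single-pass structure are illustrative choices.

def laplacian_smooth(vertices, edges, singular=frozenset(), passes=1):
    # vertices: list of (x, y, z); edges: iterable of (i, j) index pairs;
    # singular: indices of feature vertexes that must not move.
    neighbours = {i: set() for i in range(len(vertices))}
    for i, j in edges:
        neighbours[i].add(j)
        neighbours[j].add(i)
    verts = [list(v) for v in vertices]
    for _ in range(passes):
        new = [v[:] for v in verts]
        for i, nbrs in neighbours.items():
            if i in singular or not nbrs:
                continue
            for k in range(3):
                new[i][k] = sum(verts[j][k] for j in nbrs) / len(nbrs)
        verts = new
    return verts

# A pyramid apex surrounded by four base vertexes collapses onto the base plane:
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 0.3)]
edges = [(4, 0), (4, 1), (4, 2), (4, 3)]
print(laplacian_smooth(verts, edges)[4])  # -> [0.5, 0.5, 0.0], some volume is lost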

Other authors, like Frey and Borouchaki (1999), proposed to parameterize the vertex set onto a curved surface and move the points on it. If a point of the mesh is singular, it is either not processed or its position is weighted with respect to quality improvement. In the implemented process, a Laplacian smoothing is applied omitting singular points, and then the solution mesh is brought closer to the original one using the proposed method of Reduction of the Volume Difference.

D. Mesh densification

If elements do not satisfy the requested size, they are modified by inserting a vertex on the longest edges or by collapsing the shortest ones. This operation is part of an iterative procedure that chooses the lower quality elements with respect to their size (compared to the division and contraction parameters SDIV and SCOL, respectively). To find them efficiently, it is useful to use structures that maintain an order of the elements by their normalized length, so the search time is minimal.

In this case, we apply a hashing-like technique, where the edges are sorted by normalized length. Groups that contain edges longer than SDIV are processed first and the grouping is dynamically updated. This ensures that the refinement is done properly. Then, the same method is applied to the shorter edges. This grouping technique results in a robust and stable procedure for re-densification of the initial mesh. In case a smoothing is applied to the whole mesh, it is necessary to regenerate the structure, with a relatively low cost.
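The following Python sketch illustrates a bucketing structure of this kind; the bucket width and the simplification of processing only buckets lying entirely above SDIV or entirely below SCOL are assumptions made for the example.

from collections import defaultdict

S_DIV, S_COL, BUCKET = 1.4, 0.7, 0.25

def bucket_edges(normalized_lengths):
    # normalized_lengths: dict {edge_id: normalized length}.
    buckets = defaultdict(list)
    for edge, ln in normalized_lengths.items():
        buckets[int(ln / BUCKET)].append(edge)
    return buckets

def processing_order(buckets):
    # Longest buckets first (split candidates), then shortest (collapse candidates);
    # borderline buckets would be examined edge by edge in a full implementation.
    for key in sorted(buckets, reverse=True):
        if key * BUCKET >= S_DIV:
            yield from (("split", e) for e in buckets[key])
    for key in sorted(buckets):
        if (key + 1) * BUCKET <= S_COL:
            yield from (("collapse", e) for e in buckets[key])

lengths = {"e0": 2.1, "e1": 0.4, "e2": 1.0}
print(list(processing_order(bucket_edges(lengths))))  # split e0, collapse e1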

In general, for the evaluated cases, we find that the number of contractions is smaller than the number of edge divisions, so more cells in the hashing are assigned to long edges.

IV. REDUCING THE VOLUME DIFFERENCE BETWEEN MESHES

A very important aspect of the optimization is that the mesh remain close to the original one, even after undergoing several modifications. The criterion of detection of singular points is not sufficient, since it does not take into account the smooth areas of the original mesh.

Some authors (Baker and Pébay, 2003) propose a criterion for estimating the change in volume using the local variations in curvature, checking how much the mesh has changed from the original, but it appears to be a poor approximation of the problem. An accurate measurement is one that takes into account the volume enclosed between the two meshes, equivalent to the volume of the symmetric difference between the two sets (Fig. 3).


Figure 3: (left) original mesh superimposed on the modified one, (right) symmetric difference

Calculating the difference for non-oriented surface meshes in space is not a trivial task. Some strategies need to assess the entire domain in order to calculate the approximate distance between the two surfaces (Del Fresno et al., 2007), with a fairly high cost, especially when the algorithm is evaluated many times. Sánchez Cruz and Rodríguez Dagnino (2007) propose a measure of the difference between 3D voxelized objects as the effort W to move the voxels from an image of interest OI to a reference OR, in the form of Eq. (2),

W(O_I, O_R) = \sum_{a_i \in O_I} \min_{b_j \in O_R} \mathrm{dist}(a_i, b_j)     (2)

where dist(ai, bj) represents the Euclidean distance between a voxel element of the set of interest and one of the reference set, with a relatively high calculation cost (about the square of the number of elements). Another way to compute the pairing, when each bj has to be unique, is with a variant of the Kuhn-Munkres algorithm (Bondy and Murty, 1976), also known as the Hungarian algorithm, whose cost is even higher (about the cube of the number of elements).

This paper proposes a new local indicator based on the approximation of the volume enclosed by the two surfaces. To calculate it, we consider Eq. (3), where distances are now computed between the vertexes of the source mesh MI and the triangles of the reference mesh MR that are intersected by a ray that passes through the vertex and has the direction of the vertex's normal. The volume element is the product of the triangle area by the average of the distances of its vertices to the reference mesh,

DV(M_I, M_R) = \sum_{T \in M_I} A_T \, \bar{d}_T     (3)

where

\bar{d}_T = \frac{1}{3} \sum_{i=1}^{3} d(v_i, M_R)

and d(v, MR) is the distance between the vertex v and the point where the ray through v along its normal intersects MR.
To compute this indicator, we find the nearest triangle in MR that is intersected by a ray cast along the direction of the normal at ν. The normal at ν is calculated as the average of the normals of its incident triangles. The search algorithm used is a variant of ray-shooting, where the rays are segments with origin ν − αn and destination ν + αn, where α ≠ 0, n is the normal at ν, and ν ∈ MI. This scheme is used to evaluate only elements within a certain distance range in order to reduce the search cost. The value of α should be taken at least as the maximum displacement of all vertices with respect to their original positions.

This indicator has a minimum when each ν falls inside a triangle of MR. A simple way to reduce the indicator is to project MI onto MR, replacing the original position of each vertex by the one obtained in the calculation of the intersection. Figure 4 shows a case in 2D.



Figure 4: (top) calculation of the displacements; (bottom) new position of the vertexes on the surface of MR
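As an illustration, the brute-force Python sketch below (with no spatial classification and no GPU) shoots a vertex along its normal against the reference triangles using the Möller-Trumbore test and returns the distance and the projected point; summing the area-weighted averages of these distances per triangle gives the indicator of Eq. (3). All helper names are assumptions.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

def ray_triangle(orig, direction, a, b, c, eps=1e-9):
    # Moller-Trumbore: signed distance along 'direction' to triangle (a, b, c),
    # or None if the ray misses it.
    e1 = [b[i] - a[i] for i in range(3)]
    e2 = [c[i] - a[i] for i in range(3)]
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    t_vec = [orig[i] - a[i] for i in range(3)]
    u = dot(t_vec, p) * inv
    if u < 0 or u > 1:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv
    if v < 0 or u + v > 1:
        return None
    return dot(e2, q) * inv

def project_vertex(v, normal, ref_vertices, ref_triangles):
    # Distance from v to the nearest intersected triangle of M_R along +/- its
    # normal, and the corresponding projected position.
    best = None
    for tri in ref_triangles:
        a, b, c = (ref_vertices[i] for i in tri)
        t = ray_triangle(v, normal, a, b, c)
        if t is not None and (best is None or abs(t) < abs(best)):
            best = t
    if best is None:
        return 0.0, v
    hit = tuple(v[i] + best * normal[i] for i in range(3))
    return abs(best), hit

ref_v = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
print(project_vertex((0.2, 0.2, 0.5), (0, 0, 1), ref_v, [(0, 1, 2)]))
# -> (0.5, (0.2, 0.2, 0.0))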

The computational cost of the algorithm is still too high, O(r×t), where r and t represent the number of rays and triangles involved, respectively. Its efficiency can be substantially improved by sorting the triangles in an early stage and using a parallelization strategy, as shown in the next section.

It can be inferred from Fig. 4 that the criterion is not symmetric (projecting MI onto MR is not the same as projecting MR onto MI) due to the different resolutions. So, to actually calculate the volume difference, another method should be applied, such as averaging both projections (Eq. 4),

DV_{sym}(M_I, M_R) = \frac{1}{2} \left[ DV(M_I, M_R) + DV(M_R, M_I) \right]     (4)

This proposal to reduce the indicator is simple, but it does not assure the complete reduction of the volume error between the two surfaces, because the volume discretization might not be fine enough. To improve the quality of the estimator, other points besides the vertexes should be considered. The analysis of this case will be the subject of another publication.

A. Implementing the method on GPU

In practice, 3D meshes can have thousands of elements, so running this algorithm on that amount of information requires some strategy that improves efficiency. In this paper we propose to implement the ray-shooting algorithm on GPUs, as also described in Ortega Alvarado and Robles Ortega (2006), but now combined with a 3D division of space through regular cells. Each cell contains references to the triangles that fall within it, either partially or completely. The key point about this classification structure is that the time to access elements is constant, and it can suitably be mapped onto GPU memory.

The number of cells C to be used can be chosen in proportion to the number of classified triangles, so that the average occupancy per cell stays small. Using this classification, the ray is first compared with the cells and, if there is an intersection, it is evaluated against the elements that the cell contains. The theoretical processing time is O(r×t'), where t' is the average number of elements that fall in one cell.

This classification scheme leads to a dramatic reduction in the number of computed intersections, as the ray is only checked against a reduced set of elements. On the other hand, it is known that this structure is inadequate when the size of the elements, and therefore their density, varies across different regions of the mesh. For the cases studied, the proposed structure was sufficient.
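A minimal Python sketch of such a regular-cell classification follows; registering each triangle in every cell overlapped by its bounding box and the chosen resolution are illustrative assumptions.

from collections import defaultdict

class UniformGrid:
    def __init__(self, lo, hi, cells_per_axis):
        self.lo, self.n = lo, cells_per_axis
        self.size = [(hi[i] - lo[i]) / cells_per_axis for i in range(3)]
        self.cells = defaultdict(list)

    def _cell(self, p):
        # Clamp the point to the grid and return its integer cell coordinates.
        return tuple(min(self.n - 1, max(0, int((p[i] - self.lo[i]) / self.size[i])))
                     for i in range(3))

    def insert(self, tri_id, a, b, c):
        # Register the triangle in every cell overlapped by its bounding box.
        lo = self._cell([min(a[i], b[i], c[i]) for i in range(3)])
        hi = self._cell([max(a[i], b[i], c[i]) for i in range(3)])
        for x in range(lo[0], hi[0] + 1):
            for y in range(lo[1], hi[1] + 1):
                for z in range(lo[2], hi[2] + 1):
                    self.cells[(x, y, z)].append(tri_id)

    def candidates(self, p):
        # Triangles registered in the cell containing point p: the only ones a
        # ray segment passing through that cell needs to be tested against.
        return self.cells.get(self._cell(p), [])

grid = UniformGrid(lo=(0, 0, 0), hi=(1, 1, 1), cells_per_axis=4)
grid.insert(0, (0.1, 0.1, 0.1), (0.2, 0.1, 0.1), (0.1, 0.2, 0.1))
print(grid.candidates((0.15, 0.15, 0.1)))  # -> [0]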

At the beginning of the optimization, a one-time cell classification is performed on the elements of the reference mesh MR. Then, after a certain number of changes to the optimized mesh MI, we compute the intersections and obtain DV, as proposed in the previous section. Finally, the projection is done, the vertexes of MI are updated, the elements are resorted, and the mesh optimization process continues.

B. Performance Results

To evaluate the algorithm for volume reduction, three different implementations working on the same mesh and sorting grid structure were compared: a serial one, a parallel one for a 4-core processor, and a GPU implementation running on an Nvidia GTX260 graphics card. The obtained times are shown in Fig. 5. For the test, an initial mesh of 500 triangles and a reference mesh of 7,000 classified triangles were used, subjecting the first one to constant re-densification until a given size criterion was reached.


Figure 5: Comparison of the execution times of the three implementations of the volume-difference reduction method

The results were as expected, with the GPU spending barely over 13 ms on the computation of the intersections for a 25,000-triangle mesh. In comparison to the parallel CPU implementation, the GPU shows a speed-up of up to 40 times. It was also noted that the time required to update the structure is well below the calculation time. Both results show conclusively the feasibility and usefulness of the strategy.

V. PROCESS APPLICATION EXAMPLES

Simple geometries, such as spheres, were used to visually validate the solution obtained by the proposed algorithm. After that, the algorithm was tested with a set of meshes, defining a constant h function with an overall increase of 25% in the average size of the original elements.

The volume difference was computed using the method proposed in Del Fresno et al. (2007). Quality is shown as a color map (red, green and blue for poor, medium and high quality, respectively).

Applying the algorithm to a sphere generated by an algorithm that models the polar regions with high concentrations of low-quality elements resulted in a mesh where the conflicting vertexes were removed and the elements are of good quality (about 0.91). After that, an artery mesh previously processed with commercial software, with very good initial quality, was modified. Increasing the desired size of the elements by 40%, a mesh of similar quality is obtained, with far fewer elements. Figure 7 shows the results after 20 iterations.


Figure 6. Meshes with the quality distribution in pseudo-color: (left) original mesh, (right) mesh obtained by the algorithm


Figure 7. (left) artery mesh with 55,000 triangles and average quality of 0.88; (right) simplified version with 31,100 triangles and average quality of 0.96

In the second set of tests, meshes defined by a large number of irregularly distributed elements were used (see Fig. 8 and Fig. 10). The resulting meshes not only preserve the main features, but also correct errors (see Fig. 11), becoming unstructured meshes (see Fig. 9 and Fig. 11).


Figure 8. (left) polygonal model with 7,000 triangles; (right) optimized version with 6,000 triangles


Figure 9: (left) section of the original gear mesh; (right) section of the optimized gear mesh. Note how the regular structure of the original mesh is lost


Figure 10. (left) horse mesh with 96,000 triangles and average quality of 0.76; (right) optimized version with 71,000 triangles and average quality of 0.91


Figure 11: (left) section of the original horse mesh; (right) section of the optimized horse mesh. Flat elements are removed

Finally, Table 1 summarizes the time required to obtain each mesh shown, working with a constant h function that implies a reduction in the number of elements (see the second column of Table 1). The process was run on a PC with a 2 GHz quad-core processor and 4 GB of RAM.

Table 1: Algorithm times of the optimization process with GPU & CPU implementations of Volume Difference

In Table 2 we summarize the quality obtained with the application of the proposed method.

Table 2: Quality improvement obtained with the method

In the evaluated cases we note that when the original element size distribution was very irregular, as in Fig. 8, the volume loss was higher. For this type of mesh it is necessary to have a proper size function according to each position in space.

VI. CONCLUSIONS

In this paper we have proposed an alternative approach for generating surface meshes by localized changes of topology and projection onto the original mesh. From irregular meshes, other meshes with higher quality and little loss of volume were obtained, even after increasing or reducing the number of triangles of the surface.

The proposed algorithm proves to be robust in complex cases, and the GPU implementation drastically reduced the calculation time, allowing relatively complex meshes (100,000 elements) to be solved in seconds. The resulting meshes are suitable for finite element calculations.

We consider the proposed criterion for evaluating the volume difference very important because it effectively reduces the distance between meshes after they have been modified. Even though the implementation does not provide the ideal solution, we intend to continue working in this direction.

The use of the GPU as part of the process has shown significant improvements in terms of execution times, so we intend to use it in other operations such as the calculation of the Laplacian, the selection of elements to delete or split, or the swap of edges.

REFERENCES
1. Baker, T., "Identification and preservation of surface features," 13th Int. Meshing Roundtable, 299-310 (2004).
2. Baker, T. and P. Pébay, "Analysis of triangle quality measures," Math. Comput., 72, 1817-1839 (2003).
3. Bondy, J.A. and U.S.R. Murty, Graph Theory with Applications, The MacMillan Press Ltd, London and Basingstoke (1976).
4. Borouchaki, H., P. Laug and P.L. George, "Parametric surface meshing using a combined advancing front generalised Delaunay approach," Int. J. Numer. Meth. Engrg., 49, 233-259 (2000).
5. Brewer, M., L.F. Diachin, P. Knupp, T. Leurent and D. Melander, "The Mesquite mesh quality improvement toolkit," 12th Int. Meshing Roundtable, 239-250 (2003).
6. Clausse, A., V. Cifuentes and M. Vénere, "Un algoritmo para la simplificación poligonal de modelos topográficos digitales," 33° Jornadas Argentinas de Informática e Investigación Operativa (2004).
7. Cougny, H.L., "Refinement and coarsening of surface meshes," Engrg. Comput., 14, 214-222 (1998).
8. Coupez, T., "A mesh improvement method for 3D automatic remeshing," 4th International Conference on Numerical Grid Generation in Computational Fluid Dynamics and Related Fields, 615-626 (1994).
9. Dari, E., M. Oliveira, A. Salgado, R. Feijóo and M. Vénere, "An Object Oriented Tool for Automatic Surface Mesh Generation using the Advancing Front Technique," Latin American Applied Research, 27, 39-49 (1997).
10. Del Fresno, M., J. D'Amato and M. Vénere, "Un indicador de calidad para evaluar superficies segmentadas," Mecánica Computacional, 27, 3009-3021 (2007).
11. Freitag, L.A. and P. Plassmann, "Local optimization-based simplicial mesh untangling and improvement," Int. J. Numer. Meth. Engr., 49, 109-125 (2000).
12. Frey, P. and H. Borouchaki, "Geometric surface mesh optimization," Computing and Visualization in Science, 1, 113-121 (1998).
13. Frey, P. and H. Borouchaki, "Surface Mesh Quality Evaluation," International Journal for Numerical Methods in Engineering, 45, 101-118 (1999).
14. Frey, P., "About surface remeshing," 9th International Meshing Roundtable, Sandia National Laboratory, 123-156 (2000).
15. Garimella, R.V., M.J. Shashkov and P.M. Knupp, "Surface mesh quality optimization using local parametrization," 11th Int. Meshing Roundtable, 41-52 (2002).
16. Heckbert, P. and M. Garland, "Optimal Triangulation and Quadric-Based Surface Simplification," Journal of Computational Geometry: Theory and Applications, 14, 49-65 (1999).
17. Jiao, X., "Volume and Feature Preservation in Surface Mesh Optimization," 15th International Meshing Roundtable, Birmingham AL (2006).
18. Jonas, I., J. Romero, M. del Fresno, J. D'Amato and M. Vénere, "Generación de mallas de elementos finitos para arterias coronarias a partir de imágenes," Mecánica Computacional, 28, 1237-1246 (2009).
19. Lee, C.K., "On curvature element-size control in metric surface mesh generation," Int. J. Numer. Meth. Engrg., 50, 787-807 (2001).
20. Ortega Alvarado, L. and M.D. Robles Ortega, "Intersección de Segmentos utilizando la tecnología paralela CUDA," XIII Encuentros de Geometría Computacional, Zaragoza, Spain, 191-198 (2006).
21. Peiro, J., J. Peraire and K. Morgan, FELISA System Reference Manual, Part 1: Basic Theory, University of Wales Swansea, Report C/R/821/94 (1994).
22. Plassmann, P. and M. Bern, Mesh Generation, Technical Report (2004).
23. Sánchez Cruz, H. and R.M. Rodríguez Dagnino, "Normalización de una Medida de Similitud para Formas 3D con Representación en Voxels," Computación y Sistemas, 10, 372-387 (2007).
24. Schroeder, W.J. and M.S. Shephard, "A combined octree/Delaunay method for fully automatic 3D mesh generation," International Journal for Numerical Methods in Engineering, 29, 37-55 (1990).
25. Sorkine, O., D. Cohen-Or, Y. Lipman, M. Alexa and H. Seidel, "Laplacian surface editing," Symposium on Geometry Processing, 175-184 (2004).
26. Suárez, J., A. Plaza and G. Carey, "Diagrama geométrico y subdivisión híbrida de triángulos," Revista Internacional de Métodos Numéricos para Cálculo y Diseño en Ingeniería, 25, 61-78 (2009).
27. Taubin, G., "Estimating the tensor of curvature of a surface from a polyhedral approximation," Proc. of Int. Conf. on Computer Vision, 902-907 (1995).
28. Vénere, M., "Optimización de la calidad de mallas de elemento finito mediante cambios localizados en la topología," Revista Internacional de Métodos Numéricos, 13, 3-13 (1997).
29. Wang, D., O. Hassan, K. Morgan and N. Weatherill, "EQSM: An efficient high quality surface grid generation method based on remeshing," Comput. Methods Appl. Mech. Engrg., 195, 5621-5633 (2006).
30. Zhang, Y., C. Bajaj and G. Xu, "Surface smoothing and quality improvement of quadrilateral/hexahedral meshes with geometric flow," 13th Int. Meshing Roundtable (2005).
31. Zhou, K., J. Huang, J. Snyder, X. Liu, H. Bao, B. Guo and H.-Y. Shum, "Large Mesh Deformation Using the Volumetric Graph Laplacian," ACM Transactions on Graphics, 24, 496-503 (2005).

Received: June 5, 2010
Accepted: November 3, 2010
Recommended by Subject Editor: Pedro Alcântara Pessôa
