TRIANGULAR MESH DECIMATION AND UNDECIMATION FOR ENGINEERING DATA MODELLING

Sebastian Krivograd¹, Borut Žalik¹, Franc Novak²
¹University of Maribor, Faculty of Electrical Engineering and Computer Science, Maribor, Slovenia
²Jožef Stefan Institute, Ljubljana, Slovenia

Key words: engineering data modelling, triangular meshes, mesh decimation and undecimation, hash table.

Abstract: Triangular mesh decimation is the process that uses local operations on the geometry and topology of a triangle mesh to reduce the number of its triangles. Triangular meshes are used in many engineering applications where a simple interpolation of discrete data replaces a continuous and complex model of reality. Furthermore, triangular meshes are a standard input to numerical analysis tools based on the Finite Element Method. The manipulation of large triangular meshes is a bottleneck in engineering applications; hence, appropriate simplifications are needed. Apart from intuitive manual techniques, the mesh decimation process is an attractive alternative providing computer-based solutions. The paper presents a fast algorithm for the decimation of triangular meshes using the vertex elimination approach. To speed up the search for the vertex to be removed, a hash table is applied. The presented solution runs in linear time and is suitable for different applications in practice. Its usefulness is increased by the introduction of undecimation, i.e. a reverse process gradually restoring the initial triangular mesh. An illustrative example from the analysis of a power-line electric field is given.

Engineering data modelling by decimation and reconstruction of triangular meshes

Key words: engineering data modelling, triangular meshes, decimation and reconstruction of triangular meshes, hash table.

Abstract: Triangular meshes are frequently used in engineering applications, above all when we want to interpolate discrete samples of complex continuous processes. Triangular meshes are also a standard input for various numerical analyses based on the finite element method. The manipulation of very large triangular meshes, however, represents a bottleneck in engineering applications; therefore, simpler but still sufficiently faithful triangular meshes are sought. The process of reducing the number of vertices and triangles of a given triangular mesh by local operations on its geometry and topology is called triangular mesh decimation. The search for the most suitable triangular mesh is usually left to the user and relies on intuition, which makes a computer-supported decimation process an attractive alternative. In this paper we present an efficient algorithm for the decimation of triangular meshes based on vertex removal. To speed up the decision as to which vertex is the most suitable for removal, a hash table is used. The presented solution works in linear time and is suitable for various practical applications, above all where a fast system response is needed. The usefulness of the approach is increased by the possibility of gradually returning the removed vertices, i.e. by reconstruction. The operation of the algorithm is demonstrated by an illustrative example of decimating the triangular mesh of the electric field of a power line.

1 Introduction

Although natural phenomena are continuous, engineers normally measure their values only at some discrete measurement positions. The values at other positions are then calculated by interpolation.
In applications where scalar values (e.g., temperature, sea level, electric field, tension) are measured in a plane, the most suitable interpolation is based on a triangular mesh /1/. Of course, the denser the mesh, the better the interpolation that can be constructed. Unfortunately, a huge number of measured points may cause problems in manipulating the corresponding triangular meshes, resulting in slow response times and considerable computer memory requirements. This is especially critical when a triangular mesh is used as an input to numerical analysis based on FEM (the Finite Element Method) /2/. Furthermore, with the widespread use of the internet, large triangular meshes require long transfer times between collaborating parties. Because of that, triangular meshes are frequently simplified to make an acceptable compromise between the accuracy and the system limitations. The main idea stems from the fact that a triangular mesh can be simplified in regions with small or no variation of the scalar values. This task is usually performed manually, using the experience and intuition of a user. An alternative is the use of so-called mesh decimation algorithms, which simplify the triangular mesh automatically.

Triangular mesh decimation approaches developed so far can be classified according to the elements they remove from the mesh (i.e., vertices, edges, or triangles) /3, 4/:

- Vertex decimation methods are the most frequently used and are based on Schroeder's simplification algorithm /5/. The vertices are evaluated and incrementally removed from the mesh according to their importance. Various techniques have been proposed; they differ in how the vertices are evaluated and what type of triangulation is required (see /3/ for an overview).
- Edge decimation methods eliminate edges, which have been evaluated beforehand /6/. The edge being removed is replaced by a vertex, and the triangles that degenerate into edges are removed. One of the best edge decimation methods is based on quadric error metrics /7/.
- Triangle decimation methods eliminate one or more triangles. Although theoretically feasible, practical approaches using this possibility have not been reported.

In this paper, we present an algorithm that combines two mesh decimation approaches. Franc and Skala used a hash table in a parallel environment for speeding up the search for the most suitable vertex to be removed /8/. They combined vertex and edge removal in the following way: at first, the most suitable vertex is selected, and then, among the edges incident to that vertex, the shortest one is contracted. Our algorithm uses pure vertex decimation similar to the one proposed by Schroeder /5/, but with the hash table as the acceleration technique. A heuristic approach for the creation of the hash table for engineering applications has also been developed. Besides that, our algorithm is equipped with an undecimation feature, i.e. the possibility of incrementally reconstructing the original mesh. The algorithm is suitable for engineering data modelling. Its low time and space complexity make it even a candidate for embedded-system applications.

2 The decimation algorithm

The proposed algorithm for triangular mesh decimation is briefly sketched as follows (a minimal code sketch follows Figure 1):

1. Evaluate all vertices according to a chosen evaluation criterion and arrange them into a hash table.
2. Select the most suitable vertex using the hash table (for example, vertex v₀ in Figure 1a).
3. Remove the vertex from the triangular mesh.
4. Remove all triangles incident on the removed vertex (Figure 1b).
5. Triangulate the area from where the triangles have been removed (see Figure 1c).
6. Re-evaluate the vertices adjacent to the removed vertex (vertices vⱼ, vₖ, vₗ, vₘ, vₙ in Figure 1c).
7. Return to step 2 until the termination criterion is met.

Figure 1: Vertex decimation
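The loop below is a minimal, self-contained sketch of steps 1-7. The mesh interface (vertices(), neighbours()) and the helper routines evaluate_vertex, remove_vertex_and_triangles, triangulate_hole and the termination test done are assumed names for illustration, not the authors' actual code; the naive linear-scan selection in step 2 is what Section 2.2 replaces with a hash table.

```python
def decimate(mesh, evaluate_vertex, remove_vertex_and_triangles,
             triangulate_hole, done):
    # Step 1: evaluate all vertices of the input mesh.
    ev = {v: evaluate_vertex(mesh, v) for v in mesh.vertices()}
    # Step 7: repeat until the termination criterion is met.
    while not done(mesh) and ev:
        # Step 2: select the most suitable vertex. A naive linear scan
        # is used here; Section 2.2 replaces it with a hash table.
        v = min(ev, key=ev.get)
        neighbours = list(mesh.neighbours(v))
        # Steps 3-4: remove the vertex and all triangles incident on it.
        remove_vertex_and_triangles(mesh, v)
        del ev[v]
        # Step 5: re-triangulate the emptied polygonal area.
        triangulate_hole(mesh, neighbours)
        # Step 6: re-evaluate the neighbours of the removed vertex.
        for u in neighbours:
            ev[u] = evaluate_vertex(mesh, u)
```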
2.1 Evaluation of vertices

Before the decimation process starts, the vertices have to be evaluated. Let us take a look at Figure 2, where the triangles with small changes of the scalar values in their vertices can be reduced without much spoiling the presentation.

Figure 2: A triangular mesh

The evaluation of a vertex is done by investigating its neighbourhood. Different criteria can be applied; let us mention just two of them:

- A vector vᵢⱼ connecting the examined vertex vᵢ and its neighbouring vertex vⱼ is formed, and the angle between this vector and the xy plane is calculated. The average value of all the angles defined by vertex vᵢ is used as the evaluation value evᵢ.
- The average difference of the scalar values between the examined vertex vᵢ and its neighbouring vertices is used as the evaluation value evᵢ.

Better results are obtained with the first criterion. This can be confirmed by observing Figures 3a and 3b, where the scalar value against the two neighbours is the same in both figures. If the first evaluation criterion is applied, the situation in Figure 3a gives a bigger evaluation value evᵢ than the one in Figure 3b, while applying the second criterion gives the same evaluation value evᵢ in both cases. In practice, however, the criterion with the average difference of the scalar values is normally used.

Figure 3: Evaluation of vertices

2.2 Selection of a vertex to be removed

The strategy is to remove the vertices that cause the smallest change in the data representation. Therefore, the vertex with the smallest evaluation value evᵢ is selected and removed from the mesh. The evaluation values evᵢ of the neighbouring vertices are changed by the removal and must be estimated again. In the next iteration, the algorithm searches for the next vertex with the smallest evᵢ. It can be selected easily by walking through the set of the remaining vertices and picking the one with the smallest evᵢ. Unfortunately, this method works in O(n²) time and considerably slows down the algorithm. The second possibility, sorting all the vertices according to evᵢ at first and then adjusting their positions in the sorted array according to the changed evᵢ, works in expected O(n log n) time. Constant expected time complexity can be achieved by introducing a hash table /8/. However, in this case, the condition of selecting the vertex with the smallest evᵢ has to be slightly relaxed.

The vertices are organised in the hash table according to their evaluation values evᵢ. Figure 4 schematically shows the structure of the hash table; the vertices in each interval are stored in a FIFO queue.

Figure 4: Arranging vertices into the hash table

The hash table is usually organised according to the expected distribution of the input data. In our application, heuristics for linear, quadratic and exponential expected data distributions are prepared. It is important to note that during the decimation process, new evaluation values evᵢ can become greater than the maximal evaluation value determined before the hash table was formed. Therefore, an additional entry accepting such cases has to be added to the hash table.
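The following sketch illustrates such a bucket structure with FIFO queues. It assumes uniformly wide intervals (the paper's heuristics for quadratic and exponential distributions are not reproduced), and the removal of stale entries after re-evaluation is omitted for brevity; all names are illustrative.

```python
from collections import deque

class EvaluationHashTable:
    """Buckets vertices by their evaluation value ev_i."""

    def __init__(self, max_value, intervals=128):
        self.width = max_value / intervals
        # One extra overflow bucket for evaluation values that grow
        # beyond max_value during the decimation process.
        self.queues = [deque() for _ in range(intervals + 1)]

    def _index(self, value):
        return min(max(int(value / self.width), 0), len(self.queues) - 1)

    def insert(self, vertex, value):
        # (Re-)evaluated vertices join at the end of their FIFO queue.
        self.queues[self._index(value)].append(vertex)

    def pop_most_suitable(self):
        # The first vertex of the lowest non-empty queue approximates
        # the vertex with the smallest evaluation value.
        for q in self.queues:
            if q:
                return q.popleft()
        return None
```

Both insertion and selection take constant expected time; the price is that, within one interval, the vertex returned is only approximately the one with the smallest evᵢ.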
Having the hash table, the next vertex to be removed from the triangular mesh is obtained easily: the algorithm always selects the first vertex from the lowest non-empty FIFO queue and removes it. The neighbouring vertices of the removed vertex are evaluated again and inserted into the corresponding intervals at the end of the FIFO queues. In this way, performing the decimation only locally is prevented. The hash table assures constant time complexity for this part of the algorithm.

By storing a list of neighbouring vertices at each vertex of the mesh, the neighbouring vertices are accessible without any search (see Figure 5). As each vertex has lᵢ neighbouring vertices, and in general lᵢ ≪ n, the update of the evaluation values is realised in O(lᵢ) ≈ O(1) time.

Figure 5: Direct access to the vertex neighbours

It may be desired that the border vertices of the region are excluded from the decimation process; in this case, they are not inserted into the hash table. Similarly, we can test the shape of the newly created triangles according to an application-specific recommendation. For FEM analyses it is desired, for example, that the ratio of the triangle sides does not fall below 1:1:10. If that happens, the considered vertex is not removed.

2.3 Triangulation of the polygon area

After removing a vertex from the mesh, all the triangles incident to it are eliminated (the shaded part in Figure 1), and the emptied region has to be filled with new triangles. Franc and Skala applied a clever solution here by selecting the shortest edge from the removed vertex to its neighbours. The shortest edge is contracted, pulling all the edges defined by the removed vertex to the opposite vertex of the shortest edge /4/. However, this elegant method works only when the obtained gap forms a convex polygon, which is not always the case. Therefore, we applied a classical polygon triangulation algorithm in our solution. There are many ways in which a polygon can be triangulated: algorithms based on diagonal insertion, restricted Delaunay algorithms, and algorithms using Steiner points (see /9/ for an overview). In our approach, the well-known ear-cutting algorithm proposed by ElGindy et al. is used /10/.

3 Undecimation

Returning the removed vertices into the mesh in the reverse order of their elimination is an extremely useful feature in practice, giving the user the opportunity to experiment with the mesh. The user may return only a few vertices step by step, instead of processing the whole set of vertices again while trying different termination criteria. This feature (denoted as undecimation) can be realised easily and very efficiently by a proper data structure, as described below.

At the beginning, we have an array of vertices and an array of triangles. The positions of the vertices remain the same; the removed vertices are just pulled out from the mesh and marked by flags. When a vertex is removed, the triangles incident to it are removed, too, which is indicated in the triangle array by a flag. There are always fewer new triangles than original ones, and they occupy the memory locations of the old ones; some of the locations (at least one) are marked as empty. Figure 6a shows the state of the data structure before, and Figure 6b after, removing vertex v₀ in the example shown in Figure 1.

Figure 6: The state of the data structure before (a) and after (b) removing a vertex

The undecimation process requires the knowledge of how the decimation was executed and what changes in the triangular mesh occurred at each step. The easiest solution would be to store the topology of each obtained mesh; however, this would involve file operations and would slow down the whole process. By a proper organisation of the data, the shape of the mesh can instead be restored with a tolerable amount of additional memory. Figure 7 explains our solution: two additional one-way linked lists are introduced at each removed vertex. The first list stores the indices of the removed triangles; for each of them, it contains only two vertex indices, because the third one is the removed vertex itself. The second list stores the indices of the new triangles.

The process of undecimation is now extremely easy. The vertex that is going to be returned to the mesh sets up the flag indicating that it belongs to the mesh again. The triangles that were added by the polygon triangulation process are removed, and the old triangles are restored using the information from the list of the removed triangles.

Figure 7: Additional lists at each removed vertex
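A compact sketch of this bookkeeping, assuming illustrative names; for simplicity, restored triangles are appended to the triangle table instead of being written back into their original slots as in the paper.

```python
class DecimationRecord:
    """Per-removed-vertex information needed to undo one decimation step."""

    def __init__(self, vertex, removed_edges, added_triangles):
        self.vertex = vertex              # index of the removed vertex
        # One (a, b) pair per removed triangle; each triangle's third
        # vertex is the removed vertex itself, so it is not stored.
        self.removed_edges = removed_edges
        # Indices of the triangles created by the hole triangulation.
        self.added_triangles = added_triangles


def undecimate_step(vertex_alive, triangles, triangle_alive, record):
    # The returned vertex belongs to the mesh again.
    vertex_alive[record.vertex] = True
    # Remove the triangles added by the polygon triangulation ...
    for t in record.added_triangles:
        triangle_alive[t] = False
    # ... and restore the original triangles around the vertex.
    for a, b in record.removed_edges:
        triangles.append((record.vertex, a, b))
        triangle_alive.append(True)
```

Undecimating the whole mesh is then just a matter of applying undecimate_step to the stored records in the reverse order of their creation.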
4 Results

4.1 Theoretical time and space complexity

The proposed algorithm for mesh decimation consists of steps with the following complexities:

- the evaluation of the vertices is done in O(n), where n is the number of the input vertices;
- the removal of a vertex is realised in constant time O(1);
- the triangulation of the polygon using ear-cutting is performed in O(lᵢ²), where lᵢ is the number of neighbouring vertices of the removed vertex vᵢ; however, lᵢ ≪ n, and therefore this step can be considered as being done in constant time with regard to n;
- the re-evaluation of the neighbouring vertices of the removed vertex is done in constant time.

If k is the number of all the vertices removed during the decimation process and k ≤ n, the whole decimation therefore runs in linear time, as summarised below.
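Putting the step costs together (the decomposition restates the list above, with k and n as defined there):

```latex
T(n) = \underbrace{O(n)}_{\text{initial evaluation}}
     + k\,\bigl(\underbrace{O(1)}_{\text{selection}}
     + \underbrace{O(1)}_{\text{removal}}
     + \underbrace{O(l_i^2) \approx O(1)}_{\text{re-triangulation}}
     + \underbrace{O(l_i) \approx O(1)}_{\text{re-evaluation}}\bigr)
     = O(n + k) = O(n), \qquad k \le n.
```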