Computation of the Delaunay Grid

Algorithms for the computation of Delaunay grids are well known and widely available. There are many papers (see [Bowyer1981], [Watson1981], [Maus1984], [Field1986], [Juenger1989], [Dwyer1991], [Mehlhorn1991], [Dey1992]) about the computation of the Delaunay grid for a given set of points. Usually they are based on Watson's algorithm ([Bowyer1981], [Watson1981]). The main problems with this algorithm are time efficiency, robustness against rounding errors, and boundary handling.

Robustness of the Delaunay Algorithm

Dey et al. (see [Dey1992]) have described a robust algorithm for Delaunay triangulation in 3D, based on a modification of Watson's algorithm. We use another variant of this algorithm which is also robust, but gives slightly different results.

The drawback of our algorithm is that the resulting grid may contain isolated nodes if the input set contains nodes that are too close together. But this is the necessary consequence of the main advantage of our variant: every tetrahedron in the resulting grid has a positive volume in exact as well as in finite precision arithmetic. This follows trivially from the algorithm described below: only tetrahedra with a volume greater than some epsilon are created. If this epsilon is greater than the possible error of the volume computation, the volume is positive in exact arithmetic too.

For an input set containing points at very small distances from each other, it is obviously not possible to create such a grid. We consider this property of our algorithm not a failure, but a regularization of the input point set. The isolated nodes can easily be detected in the resulting grid and removed from the data structure if necessary.

The main technique we use to obtain robust results is to replace each exact test with an epsilon test. If the epsilon is greater than the possible arithmetic error, the answer is exact in one of the two directions:

  1. The circumscribing sphere inequality C(t,p) detects whether a given point p lies inside the circumscribing sphere of a tetrahedron t. We use a modified variant of this test, so that for every point which lies outside the sphere in exact arithmetic we obtain the correct answer.
  2. The volume inequality V(t) tests whether the oriented volume of a tetrahedron t is positive. We use a modified variant, so that a volume which is really negative is detected as negative.
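As an illustrative sketch (our reconstruction in Python, not the original code; the function names and the value of EPS are assumptions, where EPS stands in for a bound on the rounding error of the determinant evaluation), the two modified tests could look like this:

```python
EPS = 1e-9  # hypothetical error bound; a real implementation would derive
            # it from the coordinate magnitudes and the machine precision

def det3(m):
    """Determinant of a 3x3 matrix given as three rows."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def oriented_volume6(a, b, c, d):
    """Six times the oriented volume of the tetrahedron (a, b, c, d)."""
    return det3([[b[i] - a[i] for i in range(3)],
                 [c[i] - a[i] for i in range(3)],
                 [d[i] - a[i] for i in range(3)]])

def volume_test(a, b, c, d):
    """Modified volume inequality V(t): True only if the volume clearly
    exceeds the error bound, so a True answer also holds in exact
    arithmetic."""
    return oriented_volume6(a, b, c, d) > EPS

def inside_sphere_test(a, b, c, d, p):
    """Modified circumsphere inequality C(t, p) for a positively oriented
    tetrahedron (a, b, c, d): True only if p is clearly inside the
    circumscribing sphere, so every point which is exactly outside the
    sphere is reported correctly."""
    def lift(q):
        dx, dy, dz = q[0] - p[0], q[1] - p[1], q[2] - p[2]
        return (dx, dy, dz, dx * dx + dy * dy + dz * dz)
    la, lb, lc, ld = lift(a), lift(b), lift(c), lift(d)
    # 4x4 "lifted" determinant, expanded along the lifted column
    det4 = (la[3] * det3([lb[:3], lc[:3], ld[:3]])
          - lb[3] * det3([la[:3], lc[:3], ld[:3]])
          + lc[3] * det3([la[:3], lb[:3], ld[:3]])
          - ld[3] * det3([la[:3], lb[:3], lc[:3]]))
    return det4 > EPS
```

Shifting both comparisons by EPS is exactly the modification described above: a positive answer of either test is then guaranteed to hold in exact arithmetic as well.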

Now let us consider the algorithm in detail. We use the basic scheme of Watson's algorithm: we start with a simple "infinite" grid containing all of the input points. This big start grid avoids the special case of points lying outside the grid. Then we iteratively include the points of the given input set into the grid.

To include a new point into the grid, we first have to find at least one tetrahedron which has to be removed. We use the neighbourhood search algorithm described below to find it.

Now, beginning with this tetrahedron, we mark neighbouring tetrahedra for removal. In the standard algorithm, the only criterion for removing a tetrahedron t is the circumscribing sphere criterion C(t,p): a neighbour of t has already been removed, and the new point p lies inside the circumscribing sphere of t. We use this criterion in the modified form described above.

But we also use another criterion: a neighbour of t has already been removed, and the volume of the tetrahedron defined by the face shared with this neighbour and the new point p is negative. Again, we use the modified form described above. In exact arithmetic such tetrahedra would already have been removed by the first criterion; for the modified criteria this need not be the case.

We remove all marked tetrahedra and create a new tetrahedron for each face between a removed and a non-removed tetrahedron. Because of the modified volume criterion, each newly created tetrahedron has a positive volume, even in exact arithmetic, so the resulting grid is always topologically correct. (Note that there may be input points which are not part of the resulting grid.) The robustness of the neighbourhood search algorithm is considered below.
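To make the removal-and-refill step concrete, here is a small self-contained sketch, reduced to 2D for brevity (triangles and circumcircles instead of tetrahedra and circumspheres). It is our illustration, not the original implementation: it works brute force without the neighbour data structures of a real program, and EPS and all names are assumptions.

```python
EPS = 1e-12  # hypothetical bound on the rounding error of the tests

def area2(a, b, c):
    """Doubled oriented area of triangle (a, b, c); positive if CCW."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def circle_contains(tri, p, pts):
    """Modified circumcircle test: True only if p is clearly inside the
    circumcircle of the CCW triangle tri (lifted determinant form)."""
    rows = []
    for v in tri:
        dx, dy = pts[v][0] - p[0], pts[v][1] - p[1]
        rows.append((dx, dy, dx * dx + dy * dy))
    det = (rows[0][0] * (rows[1][1] * rows[2][2] - rows[1][2] * rows[2][1])
         - rows[0][1] * (rows[1][0] * rows[2][2] - rows[1][2] * rows[2][0])
         + rows[0][2] * (rows[1][0] * rows[2][1] - rows[1][1] * rows[2][0]))
    return det > EPS

def insert_point(triangles, pts, p):
    """One step of the modified Watson scheme: remove the cavity of p,
    then refill it.  triangles is a list of CCW vertex-index triples."""
    pts.append(p)
    pi = len(pts) - 1
    edge_map = {}
    for t in triangles:
        for i in range(3):
            edge_map.setdefault(frozenset((t[i], t[(i + 1) % 3])), []).append(t)
    # first criterion: the circumcircle (epsilon-)contains p
    bad = {t for t in triangles if circle_contains(t, p, pts)}
    # second criterion: if a cavity boundary edge would form a (nearly)
    # inverted triangle with p, remove the neighbour behind it as well
    grew = True
    while grew:
        grew = False
        for t in list(bad):
            for i in range(3):
                u, v = t[i], t[(i + 1) % 3]
                others = [s for s in edge_map[frozenset((u, v))] if s not in bad]
                if others and area2(pts[u], pts[v], p) <= EPS:
                    bad.add(others[0])
                    grew = True
    # refill: one new triangle per cavity boundary edge, created only if
    # its oriented area clearly exceeds the error bound, so every new
    # triangle is positively oriented even in exact arithmetic
    new = []
    for t in bad:
        for i in range(3):
            u, v = t[i], t[(i + 1) % 3]
            sharers = edge_map[frozenset((u, v))]
            if len(sharers) == 2 and all(s in bad for s in sharers):
                continue  # interior edge of the cavity
            if area2(pts[u], pts[v], p) > EPS:
                new.append((u, v, pi))
    triangles[:] = [t for t in triangles if t not in bad] + new
```

Starting from one big enclosing triangle (the 2D analogue of the "infinite" start grid) and inserting points one by one reproduces the scheme described above; a point whose refill would create only (nearly) degenerate triangles simply remains isolated, as discussed earlier.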

Independently of this general robustness result, we also use some other strategies to minimize the arithmetical errors in our computations:

Using exact results where possible.
If a node is located on an edge, we know the exact value of two of its three coordinates. But direction-independent code would compute these values, possibly with slightly different results. That is why we explicitly overwrite them with the exact values. Thus for the most typical degenerate cases we have (rectangles), we minimize the error (often obtaining exact zero results).
A special formula for the circumsphere test.
For this central test of the Delaunay algorithm we do not compute the centre and then the distance of the point to this centre; instead, we compute the position on the sphere opposite to the first node and use a scalar product test. This is much more accurate in the neighbourhood of the first node, which allows us to handle one of the typical degenerate situations in our algorithm: tetrahedra which contain the artificial "infinite" points. For these tetrahedra, the distances between the "normal" points are very small compared with the distances to the "infinite" points. The algorithm guarantees that after the first step the first node is always a normal node.
High intermediate precision.
The idea is to use higher precision for intermediate computations than for representing the data. To make this machine-independent, we use two different floating point types, ibgFloat and ibgDouble, which can easily be redefined.
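One possible reading of the scalar product form of the circumsphere test, sketched in Python (our reconstruction, not the original code; all names are assumptions, and a real implementation would additionally guard the final comparison with an epsilon as described above). All computations are relative to the first node a, so the test is most accurate in its neighbourhood:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as three rows."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, r):
    """Solve the 3x3 linear system m x = r by Cramer's rule."""
    d = det3(m)
    x = []
    for j in range(3):
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = r[i]
        x.append(det3(mj) / d)
    return x

def antipode(a, b, c, d):
    """Point on the circumsphere of the tetrahedron (a, b, c, d) opposite
    to the first node a.  The circumcentre relative to a is the solution x
    of (q - a) . x = |q - a|^2 / 2 for q in {b, c, d}; the antipode is
    then a + 2 x."""
    m, r = [], []
    for q in (b, c, d):
        row = [q[i] - a[i] for i in range(3)]
        m.append(row)
        r.append(sum(w * w for w in row) / 2.0)
    x = solve3(m, r)
    return [a[i] + 2.0 * x[i] for i in range(3)]

def inside_circumsphere(a, b, c, d, p):
    """Scalar product test: p lies inside the sphere with diameter
    (a, antipode) exactly when (p - a) . (p - antipode) is negative."""
    a2 = antipode(a, b, c, d)
    return sum((p[i] - a[i]) * (p[i] - a2[i]) for i in range(3)) < 0.0
```

For points p near the first node a, the factor (p - a) is computed from nearby quantities without forming the centre-to-point distance, which is the source of the improved accuracy mentioned above.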

Neighbourhood Search Algorithm

Let us describe the neighbourhood search algorithm in a little more detail. We use this algorithm in different parts of our program: for the Delaunay grid generation, and for finding the element containing a given point if we want to use a grid as input for the contravariant geometry description.

We have a start element and a target point, and we have to find the element containing the target point. In finite precision arithmetic, we have to find an element such that some epsilon-environment of the element contains the point.

In each step of the algorithm, we have a current element and consider its neighbouring elements. In exact arithmetic, let us call a neighbour good if the target point lies on the neighbour's side of the plane between the current element and its neighbour.

For finite precision arithmetic, we have to be more careful. We use a test to detect on which side of the plane the target point is located. Because the test is not exact, we compare not with 0 but with some epsilon, chosen so that we obtain the following property: if the test detects that the point lies on the neighbour's side of the plane, it lies on this side in exact arithmetic as well. Then the neighbour is good if this test detects it, and the neighbour is called allowed if the reverse test fails (e.g. if the target point lies on the plane). A good neighbour is obviously allowed.

The idea of the algorithm is simple. We travel through the grid by moving from the current element to any good neighbour until we reach an element without any good neighbour.

If there is no allowed neighbour, we have indeed found the element containing the point. If there is only one allowed neighbour, the target lies in the neighbourhood of the face between them, and it is easy to establish a bound for the distance. If the target point lies in the neighbourhood of an edge or a node, there may be two or three allowed neighbours; in this case, the bound for the distance of the target point from the element depends on the interior angles of the element. But, independently of this distance, the algorithm has finished successfully.

The question is whether this algorithm always terminates. Indeed, it is possible to construct artificial grids on which our algorithm runs into an infinite cycle.

A minor modification of the algorithm avoids this. We count the number of visits to each element. A neighbour which is allowed but not good is included in the consideration with an initial penalty of one. If there are good neighbours, we go to one of the allowed neighbours with the minimal count.

Now we can easily prove that the algorithm does not lead to an infinite loop, i.e. that it ends in finite time in an element which contains the target point in a small neighbourhood. Consider the line from an arbitrary point inside a given element to the target point; if this line intersects grid objects of codimension 2 or higher, move the start point slightly into general position. Travelling along this line, we obtain (in exact arithmetic) a sequence of elements such that each is an allowed neighbour of the previous one. Now suppose there is an infinite loop. Then some elements are visited an unbounded number of times. Consider such a sequence of elements from one of them to the target element. Because the target element is never visited (otherwise the algorithm would finish there, since it has no good neighbours), somewhere along this sequence there must be an element with an unbounded number of visits having an allowed neighbour with a bounded number of visits. This is impossible, since we always move to an allowed neighbour with minimal count.

What can we say about the algorithm without counting? For a non-degenerate Delaunay grid, the search ends in finite time even without the counting: if we move to a good neighbour in a Delaunay grid, the distance between the target point and the centre of the circumscribing hyper-sphere becomes smaller.

In 2D, it is easy to find a (non-Delaunay) grid which allows an infinite loop. Based on such an example, we can also construct a degenerate 3D Delaunay grid with an infinite loop: we simply consider the grid consisting of the north pole of a big sphere and the projections of this 2D grid from the north pole onto the sphere. Because all nodes of this grid are located on the same sphere, every grid on these nodes is a degenerate Delaunay grid --- in particular, the grid we obtain by connecting the north pole with the triangles of the given 2D grid.

Thus, for Delaunay grids there is de facto no danger that the search leads to an infinite loop. Only for applications with very high reliability requirements does the variant with counting seem necessary.

Time Requirements of the Algorithm

The Delaunay grid generation is the most time-consuming part of our implementation. So let us first consider the time dependence problem and our way of solving it. The efficiency of the algorithm depends on several factors:

On the point set:
In the worst case, the triangulation of n points contains O(n^2) tetrahedra, so the general worst-case efficiency cannot be better. But in our algorithm it is possible to avoid bad configurations (like a spider) in the point set generation step, so that the number of tetrahedra per grid point remains bounded.
On the point order:
It is possible that there is no spider in the resulting grid, but spiders occur in intermediate steps. This intermediate creation and removal of tetrahedra may require a lot of time, so it is useful to keep this in mind when implementing the point set generation. In our algorithm, the order of point inclusion is the order of point creation in the previous steps; we try to avoid this effect in the organization of the refinement step.
On the search algorithm:
The standard neighbourhood search algorithm requires O(n^{1 + {1 \over d}}) time (with d the space dimension). Using tree search techniques it is possible to reduce this to O(n \log n). In our case, we can use additional information from the octree data structure --- the nearest previously inserted point --- which allows searching in linear time.
If our attempts to avoid the second effect succeed, this results in an algorithm with linear behaviour. The empirical results agree with this dependence. Sometimes in numerical experiments the algorithm looks even better than linear; the reason seems to be that in these cases higher refinement leads to a more regular grid.

Boundary Correction

The remaining problem connected with the Delaunay algorithm is that it does not consider boundaries. So the boundary may be intersected by elements, and the resulting grid is incorrect. We have the following strategies to solve this problem:

We see that boundary correction is a difficult problem. The best way to solve it is the first method. But there are situations where the first method requires a lot of additional nodes, or fails because of a constellation we have not considered. In such cases, the other methods have to be used as an "emergency exit" to avoid a failure.

In the case of isotropic refinement, if the spider is not dangerous, the last method seems to be universal.

The problems described above for boundary faces intersected by a grid edge also occur for boundary lines intersected by grid faces. This case is more complicated to handle: it is not as easy to detect or avoid in the previous steps, it is also not as easy to detect in the Delaunay grid, and it is often more complicated to rebuild.