Dey et al. (see [Dey1992]) have described a robust algorithm for Delaunay triangulation in 3D, based on a modification of Watson's algorithm. We use another variant of this algorithm here, which is also robust but yields slightly different results.
The handicap of our algorithm is that the resulting grid may contain isolated nodes if the input set contains nodes at too small a distance from each other. But this is the necessary consequence of the main advantage of our variant: every tetrahedron in the resulting grid has a positive volume in exact as well as in finite precision arithmetic. This is a trivial consequence of the algorithm described later: only tetrahedra with volume greater than some epsilon will be created. If the epsilon is greater than the possible error of the volume computation, this proves that the volume is positive in exact arithmetic as well.
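The acceptance rule can be sketched as follows (a minimal illustration, not the paper's implementation; the names `signed_volume` and `accept_tetrahedron` and the concrete epsilon are ours):

```python
import numpy as np

EPS_VOLUME = 1e-9  # must exceed the worst-case rounding error of the volume formula

def signed_volume(a, b, c, d):
    """Signed volume of the tetrahedron (a, b, c, d) via the triple product."""
    return np.linalg.det(np.array([b - a, c - a, d - a])) / 6.0

def accept_tetrahedron(a, b, c, d, eps=EPS_VOLUME):
    """Create the tetrahedron only if its computed volume exceeds eps.

    If eps is greater than the possible floating-point error, a positive
    test result proves positivity in exact arithmetic as well."""
    return signed_volume(a, b, c, d) > eps
```

A tetrahedron whose computed volume falls below the threshold is simply never created, which is what can leave some input nodes isolated.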
If the input set contains points at very small distances, it is obviously not possible to create such a grid. We consider this property of our algorithm not as a failure, but as a regularization of the input point set. The isolated nodes can easily be detected in the resulting grid and removed from the data structure if necessary.
The main technique we use to obtain robust results is to replace each exact test with an epsilon-test. If the epsilon is greater than the possible error, we obtain an exact inequality in one of the two cases:
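The general pattern behind such epsilon-tests can be written as a three-way comparison (a sketch; the name `epsilon_sign` is ours):

```python
def epsilon_sign(value, eps):
    """Three-way epsilon test on a computed quantity.

    Returns +1 if value > eps  (provably positive in exact arithmetic),
            -1 if value < -eps (provably negative in exact arithmetic),
             0 otherwise       (undecided: the exact value may have either sign)."""
    if value > eps:
        return 1
    if value < -eps:
        return -1
    return 0
```

Only the two outer outcomes carry an exact guarantee; the undecided band is where the modified criteria of the algorithm differ from their exact counterparts.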
Now let's consider the algorithm in detail. We use the basic scheme of Watson's algorithm: we start with a simple "infinite" grid containing all of the input points. This big start grid allows us to avoid the special case of points outside the grid. Then we iteratively insert the points of the given input set into the grid.
To insert a new point into the grid, we first have to find at least one tetrahedron which has to be removed. We use a neighbourhood search algorithm, considered later, to find it.
Now, beginning with this tetrahedron, we mark neighbouring tetrahedra for removal. In the standard algorithm, the only criterion to remove a tetrahedron t is the circumscribing sphere criterion C(t,p): a neighbour of t has already been removed and the new point p is inside the circumscribing sphere of t. We use this criterion in the modified form described before.
But we also use another criterion: a neighbour of t has already been removed and the volume V(t) of the tetrahedron defined by the face towards this neighbour and the new point p is negative. Again, we use the modified form described before. In exact arithmetic, such tetrahedra would already have been removed by the first criterion; for the modified criteria this need not be the case.
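For illustration, the circumsphere criterion in its epsilon-guarded form can be sketched with the standard 4x4 determinant (the function name and epsilon are ours; the sign convention assumes the tetrahedron is positively oriented):

```python
import numpy as np

def in_circumsphere(a, b, c, d, p, eps=1e-9):
    """Epsilon-guarded circumsphere criterion C(t, p): True only if p is
    provably inside the circumsphere of the tetrahedron (a, b, c, d).

    Assumes positive orientation, i.e. det[b-a, c-a, d-a] > 0."""
    rows = []
    for v in (a, b, c, d):
        w = v - p
        rows.append([w[0], w[1], w[2], w.dot(w)])
    # With the orientation above, the determinant is negative exactly
    # when p lies strictly inside the circumscribing sphere.
    return np.linalg.det(np.array(rows)) < -eps
```

A True result guarantees containment in exact arithmetic as well, provided eps dominates the rounding error of the determinant evaluation; points near the sphere fall into the undecided band and are treated as outside.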
We remove all marked tetrahedra and create a new tetrahedron for each face between a removed and a non-removed tetrahedron. Thanks to the modified volume criterion, the volumes of the new tetrahedra are positive: the modified volume test shows that each newly created tetrahedron has a positive volume even in exact arithmetic. As a consequence, the algorithm always leads to a topologically correct grid. (Note that there may be input points which are not part of the resulting grid.) The robustness of the neighbourhood search algorithm will be considered later.
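The retriangulation step above can be sketched abstractly (a simplified illustration with a hypothetical data layout, not the paper's data structure): each tetrahedron is a tuple of four vertex ids, and a face lies on the cavity boundary exactly when it belongs to only one removed tetrahedron.

```python
from collections import Counter

def retriangulate_cavity(removed_tets, p):
    """For each face between a removed and a non-removed tetrahedron,
    form a new tetrahedron with the inserted point p.

    removed_tets: list of 4-tuples of vertex ids; p: id of the new point."""
    face_count = Counter()
    for t in removed_tets:
        for i in range(4):
            face = frozenset(t[:i] + t[i + 1:])
            face_count[face] += 1
    # Faces counted twice are interior to the cavity; faces counted once
    # separate a removed tetrahedron from a non-removed one.
    boundary = [f for f, n in face_count.items() if n == 1]
    return [tuple(sorted(f)) + (p,) for f in boundary]
```

In the real algorithm, each candidate tetrahedron would additionally pass the epsilon volume test before being created.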
Independently of this general robustness result for our algorithm, we also use some other strategies to minimize the arithmetical errors in our computations:
We have a start element and a target point, and we have to find the element containing the target point. In finite precision arithmetic, we have to find an element such that some epsilon-environment of the element contains the point.
In each step of the algorithm, we have a current element and consider its neighbouring elements. In exact arithmetic, let's call a neighbour element good if the target point lies on the neighbour's side (relative to the plane between the element and its neighbour).
For finite precision arithmetic, we have to be more careful. We use a test to detect on which side of the plane the target point is located. Because the test is not exact, we compare not with 0 but with some epsilon, so that we obtain the following property: if the test detects that the point lies on the neighbour's side of the plane, it lies on this side in exact arithmetic as well. So, the neighbour is good if this test detects it, and the neighbour is called allowed if the reverse test fails (e.g. if the target point lies on the plane). A good neighbour is obviously allowed.
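The good/allowed distinction can be sketched with a single epsilon side test (names and epsilon are ours; the normal is assumed to point from the current element towards the neighbour):

```python
import numpy as np

def classify_neighbour(plane_normal, plane_point, target, eps=1e-9):
    """Epsilon-guarded side test for the walk.

    'good'    : target provably lies on the neighbour's side,
    'allowed' : the reverse test fails (target may lie on the plane),
    'blocked' : target provably lies on the current element's side."""
    s = plane_normal.dot(target - plane_point)
    if s > eps:
        return "good"
    if s >= -eps:
        return "allowed"
    return "blocked"
```

Note that every good neighbour would also pass the allowed test; the classification merely separates the provable case from the undecided one.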
The idea of the algorithm is simple: we travel through the grid by moving from the current element to any good neighbour until we have found an element without any good neighbour.
If there is no allowed neighbour, we have really found the element containing the point. If there is only one allowed neighbour, the target lies in the neighbourhood of the face between them, and it is easy to establish a bound for the distance. If the target point lies in the neighbourhood of an edge or a node, there may be two or three allowed neighbours. In this case, the bound for the distance of the target point from the element depends on the interior angles of the element. But, independently of this distance, the algorithm has finished successfully.
The question is whether this algorithm always finishes. Indeed, it is possible to construct artificial grids on which our algorithm runs into an infinite cycle.
A minor modification of the algorithm avoids this. We count the number of visits to each element. A neighbour which is allowed but not good enters the comparison with an initial penalty of one. If there are good neighbours, we go to one of the allowed neighbours with the minimal count.
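The counting variant of the walk can be sketched as follows (interfaces are hypothetical: `neighbours(e)` yields the neighbours of element e, and `classify(e, n, target)` returns 'good', 'allowed' or 'blocked' using the epsilon side tests):

```python
def walk_with_counting(start, target, neighbours, classify):
    """Walk from start towards target, counting visits to avoid cycles."""
    visits = {}
    current = start
    while True:
        scored, has_good = [], False
        for n in neighbours(current):
            label = classify(current, n, target)
            if label == "blocked":
                continue
            # allowed but not good: initial penalty of one
            penalty = 0 if label == "good" else 1
            scored.append((visits.get(n, 0) + penalty, n))
            has_good = has_good or label == "good"
        if not has_good:
            return current  # no good neighbour: the walk has finished
        current = min(scored, key=lambda s: s[0])[1]
        visits[current] = visits.get(current, 0) + 1
```

The penalty makes good neighbours preferred while the visit counts prevent the walk from circling through the same elements forever.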
Now we can easily prove that the algorithm does not lead to an infinite loop, i.e. that the search algorithm defined above terminates in finite time in an element which contains the target point in a small neighbourhood. Consider the line from an arbitrary point inside this element to the target point. If this line intersects grid objects of codimension 2 or higher, move the start point slightly into general position. Travelling along this line, we obtain in exact arithmetic a sequence of elements such that each is an allowed neighbour of the previous one. If there were an infinite loop, there would be elements with an unbounded number of visits during the loop. Consider the sequence from such an element to the target element. Because the target element has no visits (otherwise the algorithm would finish, since it has no good neighbours), there must be an element with an unbounded number of visits which has an allowed neighbour with a bounded number of visits. This is not possible, because the algorithm would eventually prefer the neighbour with the smaller count. What can we say about the algorithm without counting? For a non-degenerate Delaunay grid, the search algorithm ends in finite time even without the counting: if we go to a good neighbour in a Delaunay grid, the distance between the centre of the circumscribing hyper-sphere and the target point becomes smaller.
In 2D, it is easy to find a (non-Delaunay) grid which allows an infinite loop. Based on such an example, we can also construct a 3D degenerate Delaunay grid with an infinite loop: we simply consider the grid consisting of the north pole of a big sphere and the projections of this 2D grid from the north pole onto the sphere. Because all nodes of this grid are located on the same sphere, every grid on them is a degenerate Delaunay grid, in particular the grid we obtain by connecting the north pole with the triangles of the given 2D grid.
Thus, for Delaunay grids there is de facto no danger that the search will lead to an infinite loop. Only for applications with very high reliability requirements does the variant with counting seem necessary.
The Delaunay grid generation is the most time-consuming part of our implementation. So, let's first consider the time dependence problem and our way to solve it. The efficiency of the algorithm depends on several things:
The remaining problem connected with the Delaunay algorithm is that it does not consider boundaries, so the boundary may be intersected by elements and the resulting grid is incorrect. We have the following strategies to solve this problem:
In the case of isotropic refinement, if the spider is not dangerous, the last method seems to be universal.
The problems described before for boundary faces intersected by a grid edge also occur for boundary lines intersected by grid faces. This case is more complicated to handle: it is not so easy to detect or avoid in the previous steps, it is also not so easy to detect in the Delaunay grid, and it is often more complicated to rebuild.