Algorithms for Contravariant Geometry Descriptions

In this part we consider various algorithms that can be used to define cogeometries.

Simplex Subdivision

We have already found that a cogeometry is determined by its results for arbitrarily small simplices. It therefore makes sense to consider also algorithms which do not work correctly for large simplices. Assume we have such an algorithm and some epsilon such that the algorithm may be considered correct for simplices smaller than epsilon.

The subdivision algorithm makes it possible to obtain the correct result for larger simplices as well. It works as follows: the simplex is subdivided into sub-simplices small enough for the initial algorithm, and the computation travels from sub-simplex to sub-simplex, using the output flag of one sub-simplex as the input flag of the next, until the boundary of the original simplex is reached.

Is it possible to get an infinite loop in this algorithm? In the general case, there is only a finite number of intersections of the traced boundaries with the inner boundaries between the sub-simplices. So an infinite loop can only be a cycle within a finite set of flags. If the initial algorithm is really symmetric, this is not possible. Thus there will usually be no infinite loops, but degenerate situations and errors of the initial algorithm may lead to them.

In a typical regular situation, the number of intersections with inner boundaries is small. Thus, if the number of steps becomes large, an infinite loop may be assumed. It therefore seems useful to break the loop after a maximal number of steps, which need not be very large. The last (k-1)-flag after such a break lies inside the simplex. This value may be returned as a k-flag on an artificial "error boundary", which allows the computation to continue.

Another idea for error handling is to restart the computation with a temporarily smaller value of epsilon. This seems useful, because an incorrect value of epsilon is the most probable source of such errors.
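The bounded loop with an error-boundary fallback can be sketched as follows. The Flag record, the step function type and the toy step functions are assumptions for illustration, not part of the original interface:

```cpp
#include <cassert>

// Hypothetical flag record: a position along the traced boundary plus
// the id of the boundary segment it lies on.  Segment id -1 marks the
// artificial "error boundary" used when the loop is broken.
struct Flag {
    double position;
    int segment;        // -1 = artificial error boundary
};

// One step of the (hypothetical) initial algorithm: advance the flag
// to the next intersection with an inner boundary.  A buggy or
// degenerate implementation may cycle between a finite set of flags.
using StepFn = Flag (*)(const Flag&);

// Subdivision loop with a bounded number of steps.  If the traced
// boundary reaches the simplex boundary (position >= 1), the regular
// result is returned; after max_steps the last flag is returned on the
// error boundary so that the computation can continue.
Flag trace(Flag f, StepFn step, int max_steps) {
    for (int i = 0; i < max_steps; ++i) {
        f = step(f);
        if (f.position >= 1.0)   // reached the simplex boundary
            return f;
    }
    f.segment = -1;              // break: artificial error boundary
    return f;
}
```

A progressing step function terminates normally, while a cycling one triggers the error boundary instead of looping forever.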

The Default Function

Assume we have defined the first (k-1) functions f(i). Then we can define a default function f(k). This function subdivides the (k-1)-boundary into parts with identical higher-dimensional neighbourhood. The resulting k-boundary consists of a single default k-segment. Possible infinite loops during the search over the simplex sides may be handled analogously to the previous algorithm.

In the case of multiple intersections of the (k-1)-segment with the border of the simplex, the algorithm may find an intersection other than the first one. Such a result is incorrect. But this error usually occurs only for large simplices, so further subdivision may be used to avoid it.

This default function f(k) returns only one unique "default segment"; it does not allow nontrivial attributes of the k-boundary to be transferred. The default function f(k+1) will not lead to any subdivision of this default k-segment, which means it does not create any (k+1)-flag. So there are no possible input values for the function f(k+2), and the geometry has codimension k. Thus, implementing the default functions f(k) and f(k+1) allows the first functions f(i), i < k, of a cogeometry to be used as a correct, complete cogeometry of codimension k.

This leads to a powerful "fast prototyping" strategy. In the first step, only the function f(0) has to be implemented. Later, the other functions f(k) may be implemented step by step. Every step leads to a better, more accurate and more powerful description of the resulting geometry, but even the first step yields a complete geometry description which may be used as a prototype of the correct cogeometry.
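As a toy illustration of the prototyping idea, assume only the region function f0 has been implemented. A generic bisection along an edge can then play the role of a default boundary search; all names and types here are hypothetical:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

struct Point { double x, y; };

// The only user-implemented part of the prototype: the region function.
using RegionFn = std::function<int(Point)>;

// Generic default boundary search: bisection between two edge
// endpoints lying in different regions.  Returns the parameter
// t in [0,1] of the boundary point on the edge a + t (b - a).
double default_f1(const RegionFn& f0, Point a, Point b, double eps) {
    int ra = f0(a);
    double lo = 0.0, hi = 1.0;
    while (hi - lo > eps) {
        double m = 0.5 * (lo + hi);
        Point p{a.x + m * (b.x - a.x), a.y + m * (b.y - a.y)};
        if (f0(p) == ra) lo = m; else hi = m;   // keep the sign change bracketed
    }
    return 0.5 * (lo + hi);
}
```

For a region function that switches at x = 0.25, the bisection locates the boundary on the edge from (0,0) to (1,0) at t close to 0.25.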

The algorithm may be slightly modified for the case of f(2). Even if there is no subdivision of a boundary face into parts, boundary faces with different pairs of neighbouring regions may be considered different. So a geometry with nontrivial boundary lines (that is, with codimension two) may be created even if only f(0) is defined.

The other flag points in this algorithm lie inside the simplex. That means they will usually not be orthogonal to the boundary. This algorithm was one of the reasons to use flags instead of intersection points, as in the first variant and the implementation in IBGD. The (k-1)-part of the initial flag is necessary in this algorithm.

The Induced Cogeometry

As we have already mentioned, if we have a smooth mapping f: X \to Y and a cogeometry G(Y) on Y, we can define an induced cogeometry G(X) on X. We have remarked that it is a shortcoming of the standard geometry description that there is no general algorithm for creating this induced geometry. Let's consider now how the cogeometry allows the induced geometry to be described. Consider first the case of an affine mapping f between affine spaces. We have to define the functions f(k) on X for given functions f(i) on Y. For a given input flag and the related side or simplex, their image also defines a correct, non-degenerate flag-side or flag-simplex pair, because every flag obtained by the following algorithm is created so that this property is fulfilled. We compute the resulting flag on Y. The pre-image of the resulting point intersects the related side or the initial simplex in a single point, which is used as the position of the resulting flag. The other flag points may be defined in the same way (non-orthogonal variant) or so that the related directions d(i) are orthogonal to the pre-image of the position of the flag in Y (orthogonal variant).

In the non-orthogonal case it is not necessary to compute anything about the pre-image in order to obtain the related positions. This matters because a rounding error could move a point out of the image. We can simply use the barycentric coordinates of the resulting points in the image simplex in Y to compute their positions in the original simplex in X.
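The barycentric trick can be sketched in 2D as follows (the point type and the function names are assumptions for illustration): we compute the barycentric coordinates of the result point in the image triangle in Y and reuse them in the original triangle in X.

```cpp
#include <array>
#include <cassert>
#include <cmath>

struct Point { double x, y; };

// Barycentric coordinates of p with respect to the triangle (a, b, c),
// computed via the usual 2x2 linear system (Cramer's rule).
std::array<double, 3> barycentric(Point p, Point a, Point b, Point c) {
    double det = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    double l1 = ((p.x - a.x) * (c.y - a.y) - (c.x - a.x) * (p.y - a.y)) / det;
    double l2 = ((b.x - a.x) * (p.y - a.y) - (p.x - a.x) * (b.y - a.y)) / det;
    return {1.0 - l1 - l2, l1, l2};
}

// Position in the original simplex in X with the same barycentric
// coordinates that the result point had in the image simplex in Y.
Point from_barycentric(const std::array<double, 3>& l,
                       Point a, Point b, Point c) {
    return {l[0] * a.x + l[1] * b.x + l[2] * c.x,
            l[0] * a.y + l[1] * b.y + l[2] * c.y};
}
```

Because an affine map preserves barycentric coordinates, no pre-image computation is needed: the coordinates found in Y are applied directly to the corners of the simplex in X.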

Thus we have an easy, straightforward algorithm for the affine case. For the smooth case, we can simply use subdivision into smaller simplices until they are small enough to allow approximation by the affine algorithm.

We can modify the algorithm to make it faster: Instead of subdividing the simplex into equal parts, we can use the barycentric coordinates of the result flag in Y to subdivide the simplex.

A variant of the algorithm may be used for piecewise affine mappings. We explicitly subdivide the simplex into pieces so that the mapping is affine on every piece, and use the affine variant on each piece. This seems useful for mappings with a small number of different parts, for example for the continuation of a geometry defined in a cube to the outside.

The Intersection of Geometries

The intersection of geometries is another example of a natural operation which is hard to implement for the standard geometry description. Let's look at how the intersection may be implemented in the contravariant geometry description.

In general, a k-segment S_k of the intersection will be described by an i-segment S_i of the first cogeometry and a (k-i)-segment S_{k-i} of the other. The codimension of the resulting geometry is the sum of the codimensions of the two cogeometries.

The general scheme is analogous to the case of an induced geometry. We can use subdivision until the simplices are small enough to be approximated by the affine situation. Considering the affine situation for some fixed pair (k, i) is straightforward but complicated to implement.

For the first functions the implementation is much simpler: the region function f(0) of the intersection, for example, simply returns the pair of regions found by the two region functions.

There are two useful variants of intersection:

Partial Intersection

In the case of partial intersection, only one "basic" segment of the first cogeometry and its boundary will be subdivided. On the other hand, information about the second cogeometry will be used only in this segment. Thus the second cogeometry need not be defined outside this segment. More precisely, the input simplex and the output flag may be partially outside the segment, but if the input flag is not in (or on the boundary of) the segment, the function f(k) may be undefined.

Note that the basic segment is not necessarily a region. It may also be a boundary segment of arbitrary codimension.

Characteristic Functions

The higher-order functions of the intersection are especially simplified if the codimension of the second cogeometry is fixed. A useful example is the cogeometry induced by a smooth real-valued function and the simple cogeometry on the real line consisting of two regions (> 0 and < 0) and the single boundary point 0. The resulting induced cogeometry on X has codimension 1. Thus, both for the general and for the partial intersection with this cogeometry, the implementation is simple. This way of modifying a cogeometry by intersection we call modification by a characteristic function. It is a simple but powerful way to create complicated cogeometries in a small number of simple steps.
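A minimal sketch of modification by a characteristic function, for the region function f(0) only (the names and the pair encoding are hypothetical): the region function of the result returns the pair consisting of the base region and the sign of the characteristic function, so the zero set of the function becomes a new codimension-1 boundary.

```cpp
#include <cassert>
#include <functional>

struct Point { double x, y; };

using RegionFn = std::function<int(Point)>;    // region function f(0)
using RealFn   = std::function<double(Point)>; // characteristic function

// Intersection with the simple cogeometry (> 0, < 0, boundary 0) on the
// real line: the resulting region function encodes the pair
// (base region, sign of chi) as a single integer.
RegionFn modify_by_characteristic(RegionFn base, RealFn chi) {
    return [base, chi](Point p) {
        int sign = chi(p) > 0.0 ? 1 : 0;
        return 2 * base(p) + sign;   // encode the pair (region, sign)
    };
}
```

With chi(p) = 1 - x^2 - y^2, for instance, every base region is split into its part inside and its part outside the unit circle.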

Graphical Input

An interesting feature of the contravariant geometry description is the possibility of graphical input. There are different ways to use such input.

The simplest possibility is to define a 2D cogeometry by a picture using the principle "one color, one region". This allows paint programs to be used to define 2D cogeometries. The implementation is straightforward: we only have to implement the region function.
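Such a picture-based region function can be sketched as follows, assuming a hypothetical row-major raster of color indices over the unit square:

```cpp
#include <cassert>
#include <vector>

// Hypothetical raster cogeometry: the region function f0 maps a point
// of the unit square to the color index of the pixel containing it
// ("one color, one region").
struct Raster {
    int width, height;
    std::vector<int> color;   // row-major color indices, one per pixel

    int f0(double x, double y) const {
        int i = static_cast<int>(x * width);
        int j = static_cast<int>(y * height);
        // clamp to the raster so that boundary points get a valid pixel
        if (i < 0) i = 0;
        if (i >= width) i = width - 1;
        if (j < 0) j = 0;
        if (j >= height) j = height - 1;
        return color[j * width + i];
    }
};
```

For a real picture, the color array would be filled from the image file; every distinct color value then acts as a region identifier.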

Another possibility is to use the color value of each pixel to encode one, two or three functions on the 2D space. These functions may be used or interpreted in different ways. For example, they may define a mapping into the three-dimensional "color space", and any cogeometry in this color space may be used to define an induced 2D cogeometry. Another example is to use the color value of a geographical map to define the elevation. This elevation may then be used to define the 3D cogeometry of the region.

Using a Simplicial Grid

Consider now the algorithms which may be used if the geometry is described by a simplicial grid. That means we assume that for every codimension the segments are defined as a union of simplices of the related dimension.

If we set aside the problem of how to find the necessary simplices in the data structure, the algorithm is straightforward. In the general case, we can actually compute the intersection line and travel along it to find the correct continuation as given by the formal definition, even for large simplices. A lot of code to handle degenerate cases is necessary in the implementation, but our general strategy for handling these cases is sufficient. Thus, the central problem for the algorithm is to find the simplices we need, especially if the number of simplices is very large.

For the functions f(k) with k > 0 we always have an input flag whose position in the simplex data structure was already found by the call which created this flag as output. To avoid a double search, it is enough to save the related information into the flag data. The standard mechanism for transferring attributes may be used for this purpose. Note that this information can also help to handle degenerate cases. For a flag on the boundary between different simplices, we use one of these simplices to define the segment of the flag. If we have saved the information about this simplex, we have also saved how we handled this degeneration.

If we want to travel along the intersection line, we have to switch from a given simplex to its neighbour. Thus, for an efficient implementation, the related neighbourhood information must be available very fast. In this case, there is no problem obtaining an algorithm which is efficient enough for the functions f(k) with k > 0.

Only in the case of the region function f(0) do we have no input flag which may be used as the start of the neighbourhood search. In principle, a good starting point would be one of the nearest previously found points. But this information is not available in the function f(0) itself, though it is often available in the function which calls f(0). To allow the use of this information, we have to modify the interface: we must allow the calling function to pass a pointer to a good starting point so that it may be used by f(0).

This may easily be realized in the C++ interface. We simply include a pointer to a starting point in the cogeometry class and allow this value to be modified. It is initialized with the null pointer, and after each call of f(0) it is set to the argument of this call. Thus, if the calling function does not use this possibility, the position of the last point is used as the starting point. This seems an optimal solution of the problem.
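A sketch of this interface detail (the class and all names are hypothetical, not the actual implementation): the cogeometry object keeps a pointer to the last query point, which serves as the default starting point of the next neighbourhood search.

```cpp
#include <cassert>

struct Point { double x, y; };

class Cogeometry {
public:
    // The caller may install a better starting point before calling f0.
    void set_start(const Point* p) { start_ = p; }
    const Point* start() const { return start_; }

    int f0(const Point& p) {
        // ...the neighbourhood search would begin at *start_ if it is
        // non-null; here only a toy region function stands in for it...
        int region = p.x < 0.0 ? 0 : 1;
        start_ = &p;              // remember the argument of this call
        return region;
    }

private:
    const Point* start_ = nullptr;   // no starting point known initially
};
```

If the caller never touches set_start, each call of f0 automatically starts the search at the previously queried point.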

If we have a good starting point, the same algorithm as used for f(1) may also be used for f(0). The only difference is that in the case of f(1) we return immediately once we find a boundary intersection, while in the case of f(0) we have to travel from the start to the end of the edge.

Using a Boundary Grid

The most usual way to describe a geometry, the boundary grid, is similar to the previous case, but it does not contain simplices of codimension 0. Instead, we have information about the left and the right region for every segment of codimension 1. The higher-order functions cause no problems, because the algorithm used for the complete grid may also be used if we have no grid in the regions. But to obtain an efficient algorithm for f(0) and f(1) we need another approach.

Thus, we have to find the boundary simplex which has the first intersection with a given edge. For a small number of simplices, we can use a simple loop over all simplices. To make the algorithm fast enough for a large number of simplices, we have to create an additional data structure which helps us find this simplex. The usual way is to use a search tree. Another possibility is to add a grid in the regions, e.g. using Delaunay techniques.
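The simple loop variant can be sketched in 2D as follows (the types and names are assumptions for illustration); for a large boundary grid, a search tree would replace the linear scan:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Point { double x, y; };
struct Segment { Point p, q; int id; };   // a codimension-1 boundary piece

// First intersection of the edge a + t (b - a), t in [0,1], with a set
// of boundary segments.  Returns the id of the segment with the
// smallest parameter t, or -1 if the edge crosses no segment.
int first_intersection(Point a, Point b, const std::vector<Segment>& segs,
                       double* t_out) {
    double best_t = 2.0;
    int best_id = -1;
    double dx = b.x - a.x, dy = b.y - a.y;
    for (const Segment& s : segs) {
        double ex = s.q.x - s.p.x, ey = s.q.y - s.p.y;
        double det = dx * ey - dy * ex;
        if (std::fabs(det) < 1e-14) continue;       // parallel: no crossing
        double rx = s.p.x - a.x, ry = s.p.y - a.y;
        double t = (rx * ey - ry * ex) / det;       // parameter on the edge
        double u = (rx * dy - ry * dx) / det;       // parameter on the segment
        if (t >= 0.0 && t <= 1.0 && u >= 0.0 && u <= 1.0 && t < best_t) {
            best_t = t;
            best_id = s.id;
        }
    }
    if (best_id >= 0 && t_out) *t_out = best_t;
    return best_id;
}
```

The linear scan is O(n) per edge; this is exactly the cost that a search tree over the boundary simplices would reduce to roughly logarithmic time.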

Another problem is degeneration. If the edge intersects approximately the common border of different boundary face simplices, we have to consider all these simplices together and cannot simply use the first intersection, because rounding errors may lead to an incorrect order of these intersections along the edge.

Thus, in principle it is possible to implement a fast algorithm for a boundary grid, but this type of input may be considered as the "worst case" for the cogeometry.

Other Possibilities

There are also other natural possibilities to create and modify cogeometries. They can usually be implemented easily, at least for the region function f(0). The real power of the contravariant geometry description is that it allows all these separate methods to be combined. A 3D cogeometry defined by a grid may be intersected with a 3D cogeometry induced by some mapping. Attributes may be used to define mappings. Switching between different dimensions is trivial. The elevation profile of a region may be used to define the 3D surface geometry of this region, and this may be combined with other maps of the region to subdivide the surface into parts.

Time Requirements

Let's consider now the time required by the different algorithms considered above. For simplicity, we consider only the case of isotropic simplices. The general algorithm consists of the simplex subdivision algorithm and the call of one of the special algorithms for the smallest simplices. The time necessary for a single call of f(k) for k > 0 obviously depends on the following data:

epsilon_0: the required accuracy of the boundary computation,
epsilon_1: the size of the input simplex,
epsilon_2: the size of a simplex which is small enough to call the special algorithm.

If epsilon_0 = epsilon_2, the time for a call of the special algorithm may be estimated by a constant. More precisely, this has to be considered a natural condition for a fast algorithm. Indeed, we simply have to answer a single question with a finite number of possible answers: is there an intersection inside, or which side of the simplex contains the continuation? Then we can simply use the middle point of the simplex or of the side as the position of the output flag. If this requires too much time, we are obviously using a bad algorithm.

In principle, this time depends on the size of the data of the geometry description. E.g. for the case of the boundary grid with a search tree, we obtain a logarithmic dependence on the size of the boundary grid. If the typical size epsilon_3 of the simplices in a volume or boundary grid is smaller than epsilon_0 (that means we use an input grid which is too fine), we obtain an additional factor epsilon_0 / epsilon_3.

A regular geometry is a geometry such that the length of the intersection line of a segment with the simplex may be estimated by epsilon_1, and the number of intersections of the k-segments with a k-simplex may be estimated by a constant. In a regular geometry, the time for a call of f(k) may be estimated by epsilon_1 / epsilon_0. At first, we consider the variant of the algorithm with epsilon_2 = epsilon_0. This can be done because a variant with smaller epsilon_2 is usually more accurate, and greater values of epsilon_2 are used only for reasons of time efficiency.

The time for the calls on the lowest-level simplices is estimated by the previous lemma. The time required for the creation and subdivision of each simplex may be estimated by a constant. The number of steps in the loop in each simplex may be estimated by the number of intersections with the internal planes, which is bounded because the geometry is regular. Therefore the time for a call on each simplex, excluding the call time for the sub-simplices, may be estimated by a constant.

The length of the intersection line may be estimated by epsilon_1. For each refinement level k we subdivide this line into 2^k parts of length delta_k, where delta_k = epsilon_1 / 2^k is the size of a simplex of this level. The number of simplices containing such a part is fixed by the dimension, and because of the regularity, the number of simplex calls on this level necessary to travel along this part may be estimated by a constant. Thus, the number of simplex calls on level k can be estimated by 2^k. The sum over all levels from 0 to l = \log_2 (epsilon_1 / epsilon_0) can be estimated by 2^{l+1}, that is, up to a constant factor, by epsilon_1 / epsilon_0. Usually, for the time of the special algorithm for the size epsilon_2 we obtain a better estimate than epsilon_2 / epsilon_0. This allows a better estimate for higher values of epsilon_2. For example, if we can estimate this call by a constant, we can estimate the resulting call time by epsilon_1 / epsilon_2.
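Written out, the level count is the usual geometric-series estimate (in the notation of this section, with l = \log_2(\epsilon_1 / \epsilon_0)):

```latex
% Total number of simplex calls, summed over the refinement levels
% k = 0, 1, \dots, l:
\sum_{k=0}^{l} 2^k \;=\; 2^{l+1} - 1 \;<\; 2 \cdot 2^{l} \;=\; 2\,\frac{\epsilon_1}{\epsilon_0}.
```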

If the time required for calls of f(0) cannot simply be estimated by a constant, or this constant is too big, it may be estimated by the time for a single call (the first) plus the time for the calls of f(1) along the edge between the starting point and the point we have to find.

Now let's estimate the overall time necessary for the geometry description in a simple but typical application. Assume we have a regular isotropic grid with n points in each direction in d-dimensional space. Assume we call f(0) first with an optimal starting point, that means at a distance of the edge length g of the grid. Assume also that we call f(k) only for simplices with corners at grid points and with minimal size (bounded by g). Assume also that the geometry is regular enough so that at distances of order g it may be approximated by planes, at least as far as the number of intersections is concerned.

Consider a regular refinement step with factor 2. Instead of a single intersection of a k-side with a k-segment we now have 2^{d-k} such intersections. The function f(k) may be called only if there is an intersection of a (k-1)-segment with a side in the immediate neighbourhood. Therefore the number of calls of f(k) can increase only by a factor 2^{d-k+1}, and the number of calls of f(0) increases by the factor 2^d. The time for each call will not increase; it will even decrease, because the distances become smaller. For the overall time of cogeometry calls we obtain the maximal factor 2^d --- the same as for the number of nodes. So we usually have the following: the time required for cogeometry calls depends linearly on the number of nodes.

Another situation of interest is the use of a grid of the same density for the geometry description. This situation is typical for time-dependent processes, where the grid of the previous step is used to define the new cogeometry. In this case, the two grids are usually at the same refinement level. To find the time behaviour in this case, we have to consider the refinement of both the new and the old grid. The number of calls increases by the factor 2^d. But what can we say about the time for each call? Here we have to compare two effects: the description grid becomes finer, so more of its simplices have to be traversed per unit of length, but the distances to be traversed in each call become smaller by the same factor.

As a result, the time required for each call remains approximately unchanged. In this situation, too, we obtain approximately linear behaviour.

These considerations also show that the time for calls of f(k) for higher k is not relevant for larger numbers of nodes. It is relevant only for a small number of nodes, when many cells of the grid have intersections with the boundary.

Topological Errors and Convexity

Another very interesting question for the cogeometry is the following: if we create a grid using the contravariant geometry description, is it possible to guarantee that there will be no topological errors? The answer seems to be the greatest problem of the contravariant geometry description: in general, it is not possible to avoid topological errors without additional information. Indeed, there may be very thin subregions inside a region. Whether we find such a subregion depends on the grid density and on chance. If the subregion is so thin that no point of the finest grid lies inside it, we have no chance to detect that it exists.

Note that this is a consequence of the possibility of defining cogeometries with an infinite number of regions, and other cogeometries with infinitely fine structure like Julia sets or the Mandelbrot set. Such geometries obviously have to be simplified by any finite grid.

On the other hand, this description shows that dangerous situations have a special structure. We can classify such dangerous subregions by a "codimension", which is simply the number of "very thin directions". For codimension 1 we have a crack, for codimension 2 a channel, and for the maximal codimension an enclave. There may also be segments of higher codimension which are dangerous. They all share some common properties.

This leads to two strategies to avoid errors. The first strategy may be used in problems where topological errors are in principle not very dangerous if they are small enough. In the second strategy, we simply subdivide the environment into parts; the search operation for the additional boundaries and sub-boundaries then allows the thin segment to be detected.

Let's show that this strategy is universal. In principle, we have to consider only geometries which may be approximated by a grid. But a grid may be considered as a subdivision of the segments into convex parts. Now let's show that a geometry described only by convex segments is not dangerous: there is a grid generation algorithm which defines the cogeometry in a given region without topological errors (in exact real arithmetic) for an arbitrary finite cogeometry consisting only of convex segments. Let's describe this algorithm. We start with a simplicial grid containing the region of interest. In the first step, we detect the segment of every node of the grid by calls of f(0).

In step k we detect the topology of the k-dimensional sides of the simplices, based on the correct detection of the topology of the (k-1)-dimensional sides. We use the function f(k) with all (k-1)-flags which have been detected on the boundary of the side. If we obtain more than one intersection point inside the side, we subdivide the side into parts. For the boundary between the parts, the previous step allows the geometry and topology to be defined correctly. This subdivision terminates because of the finite number of segments and the trivial fact that in general there may be only one intersection between a convex segment and a simplex of the related codimension. Because of the convexity of the segments, for every pair of flags we have found, the line between them lies in the same segment. Finally, we define the segments as the convex hull of all points of the segment found by the algorithm (including the boundary points).

It can easily be proved by induction over the dimension of the sides, and for each side dimension by induction over the dimension of the segments, that every point on a side may be described as a convex linear combination of such points. The induction over the sides begins with the corners, whose segments are defined by f(0). The induction on each part of a side begins with the single intersection point on the side. We use the fact that a convex segment is planar and consider a line through the given point lying in this segment and in the side. This line ends on a boundary of the side or at a boundary point of the segment. A boundary point of the segment is part of a segment of lower dimension. Thus, for both ends we obtain such a linear combination by induction, and therefore we obtain it for our point as well.

We have not considered the case of degeneration here. But the geometry is defined by a finite number N of real numbers (the vertex coordinates). Therefore, if we obtain a degeneration in the algorithm, we can simply restart the algorithm with an irrational modification of the initial values. After at most N such restarts (each time with an irrational modification independent of the previous ones) there will be no degeneracy. This proves the theorem.

In practice, it is not necessary to subdivide all segments into approximately convex parts. Only small, very non-convex parts have to be modified. Usually, if all details of the geometry are big enough that some points of the grid lie inside, such subdivisions are not necessary.

Subdivisions may also be useful to obtain sharp edges and corners. If we do not subdivide the faces adjacent to a boundary edge, the position of the edge may not be detected; that means the edge or the corner will be rounded. To avoid this effect, we can subdivide the boundary face into two parts with a boundary line between them. Now the previous algorithm easily detects the edge and explicitly computes grid nodes on the boundary line. Subdividing the boundary line by a vertex at the position of the corner likewise leads to an explicit grid node at the corner.