The subdivision algorithm makes it possible to obtain the correct result also for larger simplices. It works as follows:
In a typical regular situation, the number of intersections with inner boundaries is small. Thus, if the number of steps becomes large, an infinite loop may be assumed, and it is useful to break the loop after a maximal number of steps, which need not be very large. The last (k-1)-flag after such a break lies inside the simplex. It may be returned as a k-flag on some artificial "error boundary". This allows the computation to continue.
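A minimal sketch of this loop break; the type Flag, the step callback, and the error segment id are our illustrative names, not taken from the implementation:

\begin{verbatim}
#include <functional>
#include <optional>

// Hypothetical flag type; a real (k-1)-flag carries much more data.
struct Flag { int segment = 0; };

constexpr int kErrorBoundary = -1; // artificial "error boundary" id
constexpr int kMaxSteps = 1000;    // break threshold, need not be large

// 'step' performs one step of the loop and returns the next flag, or
// std::nullopt once the simplex boundary is reached (the regular exit).
Flag traverse(Flag flag,
              const std::function<std::optional<Flag>(const Flag&)>& step) {
    for (int i = 0; i < kMaxSteps; ++i) {
        std::optional<Flag> next = step(flag);
        if (!next) return flag;   // regular exit: flag on the boundary
        flag = *next;
    }
    // Too many steps: an infinite loop is assumed. The last (k-1)-flag
    // lies inside the simplex; return it as a flag on the artificial
    // error boundary, so that the caller can continue the computation.
    flag.segment = kErrorBoundary;
    return flag;
}
\end{verbatim}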
Another idea for error handling is to restart the computation with a temporarily smaller value of epsilon. This seems useful because an incorrect value of epsilon is the most probable source of error.
In the case of multiple intersections of the (k-1)-segment with the border of the simplex, the algorithm may find an intersection of this segment with the border which is not the first one. Such a result is incorrect. But this error usually occurs only for big simplices, so further subdivision may be used to avoid it.
This default function f(k) returns only one unique "default segment"; it does not allow nontrivial attributes of the k-boundary to be transferred. The default function f(k+1) never subdivides this default k-segment, that is, it creates no (k+1)-flag. Hence there are no possible input values for the function f(k+2), and the geometry has codimension k. Thus, the implementation of the default functions f(k) and f(k+1) allows the first functions f(i), i < k, of a cogeometry to be used as a correct, complete cogeometry of codimension k.
This leads to a powerful "fast prototyping" strategy. In the first step, only the function f(0) has to be implemented. Later, the other functions f(k) may be implemented step by step. Every step leads to a better, more accurate, and more powerful description of the resulting geometry, but already the first step yields a complete geometry description which may be used as a prototype of the correct cogeometry.
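A sketch of the first prototyping step under this scheme; the class name, the signatures, and the default segment id are illustrative only:

\begin{verbatim}
#include <optional>

struct Segment { int id = 0; };
struct Flag    { Segment seg; };

// Prototype cogeometry of codimension 1: only f(0) is implemented;
// f(1) and f(2) are the default functions described above.
struct PrototypeCogeometry {
    // f(0): region function, the only real implementation work.
    Segment f0(double x, double y, double z) const {
        (void)y; (void)z;
        return Segment{ x < 0.0 ? 1 : 2 };   // e.g. two half-spaces
    }
    // Default f(1): one unique "default segment"; nontrivial attributes
    // of the 1-boundary cannot be transferred.
    Segment f1() const { return Segment{ -1 }; }
    // Default f(2): never subdivides the default 1-segment, so no
    // 2-flag is created and the geometry has codimension 1.
    std::optional<Flag> f2() const { return std::nullopt; }
};
\end{verbatim}

Implementing a real f(1) later, with defaults for f(2) and f(3), raises the codimension of the prototype to two, and so on.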
The algorithm may be slightly modified for the case of f(2). Even if there is no subdivision of the boundary face into parts, boundary faces with different pairs of neighbouring regions may be considered as different. Thus, a geometry with nontrivial boundary lines (that is, with codimension two) may be created even if only f(0) is defined.
The other flag points in the algorithm lie inside the simplex. That means they will usually not be orthogonal to the boundary. This algorithm was one of the reasons to use flags instead of intersection points, as in the first variant and the implementation in IBGD. The (k-1)-part of the initial flag is necessary in this algorithm.
To compute the related positions in the nonorthogonal case, it is not necessary to compute anything about the pre-image, which could be dangerous if a rounding error moves the point out of the image. We can simply use the barycentric coordinates of the resulting points in the image simplex in Y to compute their positions in the original simplex in X.
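In the affine 2D case, this transfer reduces to a few lines. The following sketch (our helper names, triangle simplices) computes the barycentric coordinates in the image triangle in Y and applies them to the preimage triangle in X:

\begin{verbatim}
#include <array>

using Vec2 = std::array<double, 2>;

// Barycentric coordinates of p with respect to the triangle (a, b, c)
// in the image space Y, computed by Cramer's rule.
std::array<double, 3> barycentric(Vec2 p, Vec2 a, Vec2 b, Vec2 c) {
    double det = (b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1]);
    double l1  = ((p[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(p[1]-a[1])) / det;
    double l2  = ((b[0]-a[0])*(p[1]-a[1]) - (p[0]-a[0])*(b[1]-a[1])) / det;
    return { 1.0 - l1 - l2, l1, l2 };
}

// Position in the original simplex in X: the same barycentric
// coordinates applied to the preimage vertices (A, B, C). Nothing
// has to be computed about the pre-image of the mapping itself.
Vec2 pullBack(const std::array<double, 3>& l, Vec2 A, Vec2 B, Vec2 C) {
    return { l[0]*A[0] + l[1]*B[0] + l[2]*C[0],
             l[0]*A[1] + l[1]*B[1] + l[2]*C[1] };
}
\end{verbatim}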
Thus, we have an easy, straightforward algorithm for the affine case. For the smooth case, we can simply use subdivision into smaller simplices until the simplices are small enough to allow the approximation by the affine algorithm.
We can modify the algorithm to make it faster: instead of subdividing the simplex into equal parts, we can use the barycentric coordinates of the result flag in Y to subdivide the simplex.
A variant of the algorithm may be used for piecewise affine mappings. We explicitly subdivide the simplex into pieces so that the mapping is affine on every piece, and use the affine variant on each piece. This seems useful for mappings with a small number of different parts, for example for the continuation of a geometry defined in a cube to the outside.
In general, a k-segment S_k of the intersection will be described by an i-segment S_i of the first and a (k-i)-segment S_{k-i} of the other cogeometry. The codimension of the resulting geometry is the sum of the codimensions of the two cogeometries.
The general scheme is analogous to the case of an induced geometry. We can use subdivision until the simplices are small enough to be approximated by the affine situation. Considering the affine situation for some fixed pair (k, i) is straightforward, but complicated to implement.
For the first functions the implementation is much simpler:
Note that the basic segment is not necessarily a region. It may also be a boundary segment of arbitrary codimension.
The simplest possibility is to define a 2D cogeometry by a picture using the principle "one color --- one region". This allows paint programs to be used to define 2D cogeometries. The implementation is straightforward: we only have to implement the region function.
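A sketch of such a region function, assuming the picture is available as a simple array of color indices (the Image type and the name regionFromPicture are ours):

\begin{verbatim}
#include <vector>

// Illustrative image type: one color index per pixel, row-major.
struct Image {
    int width = 0, height = 0;
    std::vector<int> color;
};

// Region function f(0) following "one color --- one region": the color
// of the pixel containing the point (x, y) in [0,1)^2 is the region id.
int regionFromPicture(const Image& img, double x, double y) {
    int px = static_cast<int>(x * img.width);
    int py = static_cast<int>(y * img.height);
    return img.color[py * img.width + px];
}
\end{verbatim}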
Another possibility is to use the color value of the pixel to encode one, two, or three functions on the 2D space. These functions may be used or interpreted in different ways. For example, they may define a mapping into the three-dimensional "color space", and any cogeometry in this color space may be used to define an induced 2D cogeometry. Another example is to use the color value of a geographical map to define the elevation; this elevation may then be used to define the 3D cogeometry of this region.
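Continuing the previous sketch (the same hypothetical Image type), the elevation example might look as follows; the scaling factor is illustrative:

\begin{verbatim}
// The color value of the pixel is interpreted as an elevation e(x, y)
// (scaled by an illustrative factor), not as a region id.
double elevation(const Image& img, double x, double y) {
    return 0.001 * regionFromPicture(img, x, y);
}

// Induced 3D cogeometry: the region function compares the z coordinate
// with the elevation of the map at (x, y).
int region3d(const Image& img, double x, double y, double z) {
    return z < elevation(img, x, y) ? 1 /* ground */ : 2 /* air */;
}
\end{verbatim}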
If we set aside the problem of how to find the necessary simplices in the data structure, the algorithm is straightforward. In the general case, we can actually compute the intersection line and travel along this line to find the correct continuation as defined by the formal definition, also for big simplices. A lot of code to handle degenerate cases will be necessary in the implementation, but our general strategy for handling these cases is sufficient. Thus, the central problem for the algorithm is to find the simplices we need, especially if the number of simplices is very large.
For the functions f(k) with k > 0, we always have an input flag whose position in the simplex data structure was already found by the call which created this flag as output. To avoid a double search, it is enough to save the related information into the flag data. The standard mechanism to transfer attributes may be used for this purpose. Note that this information can also help to handle degenerate cases: for a flag on the boundary between different simplices, we use one of these simplices to define the segment of the flag. If we have saved the information about this simplex, we have also saved the information about how we have handled this degeneration.
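A sketch of this caching, with the attribute mechanism reduced to the single entry needed here (all names are ours):

\begin{verbatim}
struct Simplex;  // node of the simplex data structure

// Flag with the attribute mechanism reduced to the one entry we need:
// the simplex found when the flag was created as output of f(k).
struct Flag {
    const Simplex* simplex = nullptr;  // saved position in the grid;
                                       // for a flag on a border it also
                                       // records which simplex resolved
                                       // the degeneration
};

// Position search for an input flag: the saved pointer avoids the
// double search; a full search is only the fallback.
const Simplex* locate(const Flag& flag) {
    if (flag.simplex != nullptr) return flag.simplex;
    return nullptr;  // full search of the data structure, not sketched
}
\end{verbatim}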
If we want to travel along the intersection line, we have to switch from a given simplex to its neighbour. Thus, to obtain an efficient implementation, the related neighbourhood information must be available very fast. In this case, there is no problem obtaining an algorithm which is efficient enough for the functions f(k) with k > 0.
Only in the case of the region function f(0) do we have no input flag which may be used as the start of the neighbourhood search. In principle, a good starting point is one of the nearest previously found points. But this information is not available in the function f(0) itself; often, however, it is available in the function which calls f(0). That's why, to allow the usage of this information, we have to modify the interface: we must allow the calling function to transfer a pointer to a good starting point so that it may be used by f(0).
This may be easily realized in the C++ interface. We simply include a pointer to a starting point in the cogeometry class and allow this value to be modified. It is initialized with the null pointer, and after each call of f(0) it becomes the argument of this call. Thus, if the calling function does not use this possibility, the position of the last point is used as the starting point. This seems to be an optimal solution of this problem.
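A sketch of this interface, with illustrative names and with the search itself omitted:

\begin{verbatim}
class Cogeometry {
public:
    // f(0): detect the region of the point p. The pointer argument of
    // each call becomes the starting point of the next search.
    int f0(const double* p) {
        int region = findRegion(p, start_);
        start_ = p;  // default starting point for the next call
        return region;
    }
    // The calling function may transfer a better starting point.
    void setStart(const double* p) { start_ = p; }

private:
    // Neighbourhood search from start to p; with start == nullptr
    // (only the very first call) a full search is necessary.
    int findRegion(const double* p, const double* start) {
        (void)p; (void)start;
        return 0;  // search itself not sketched here
    }
    const double* start_ = nullptr;  // initialized with the null pointer
};
\end{verbatim}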
If we have a good starting point, the same algorithm as used for f(1) may also be used for f(0). The only difference is that in the case of f(1) we return immediately once we find a boundary intersection, while in the case of f(0) we have to travel from the start to the end of the edge.
Thus, we have to find the boundary simplex which has the first intersection with a given edge. For a small number of simplices, we can use a simple loop over all simplices. To make the algorithm fast enough for a large number of simplices, we have to create an additional data structure which helps us to find this simplex. The usual way is to use a search tree. Another possibility is to add a grid in the regions, e.g. using Delaunay techniques.
Another problem is degeneration. If the edge intersects approximately at the common border of different boundary face simplices, we have to consider all these simplices together; we cannot simply use the first intersection, because rounding errors may lead to an incorrect order of these intersections on the edge.
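A sketch combining both points, assuming a hypothetical intersection test that returns the edge parameter t in [0, 1] of a hit (all types and names are ours); for a large number of simplices the two loops would be replaced by queries to the search tree:

\begin{verbatim}
#include <optional>
#include <vector>

struct Edge {};             // schematic: an edge of the query simplex
struct BoundarySimplex {};  // schematic: a simplex of the boundary grid

// First intersection of the edge with the boundary grid: the simple
// loop over all simplices. Because of possible rounding errors, all
// simplices hit within eps of the first one are returned together
// instead of a single "first" simplex.
template <class IntersectFn>
std::vector<const BoundarySimplex*>
firstHits(const Edge& e, const std::vector<BoundarySimplex>& all,
          IntersectFn intersect, double eps) {
    double first = 2.0;  // any value > 1 means "no hit yet"
    for (const auto& s : all)
        if (std::optional<double> t = intersect(e, s); t && *t < first)
            first = *t;
    std::vector<const BoundarySimplex*> hits;
    if (first > 1.0) return hits;  // no intersection at all
    for (const auto& s : all)
        if (std::optional<double> t = intersect(e, s);
            t && *t <= first + eps)
            hits.push_back(&s);
    return hits;
}
\end{verbatim}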
Thus, in principle it is possible to implement a fast algorithm for a boundary grid, but this type of input may be considered as the "worst case" for the cogeometry.
In principle, this time depends on the size of the data of the geometry description. For example, for the case of the boundary grid with a search tree we obtain a logarithmic dependence on the size of the boundary grid. If the typical size h of a simplex in a volume or boundary grid is smaller than epsilon (that means we use an input grid which is too fine), we obtain an additional factor epsilon / h.
A regular geometry is a geometry such that the length of the intersection line of a segment with the simplex may be estimated by 1, and the number of intersections of the k-segments with a k-simplex may be estimated by a constant. In a regular geometry, the time for a call of f(k) may be estimated by 1 / epsilon. At first, we consider the variant of the algorithm with delta = epsilon. This can be done because usually a variant with a smaller delta is more accurate, and greater values of delta are used only for reasons of time efficiency.
The time for the calls of the lowest-level simplices is estimated by the previous lemma. The time required for the creation and subdivision of each simplex may be estimated by a constant. The number of steps in the loop in each simplex may be estimated by the number of intersections with the internal planes, which is bounded because the geometry is regular. That's why the time for a call of each simplex, excluding the call time for the sub-simplices, may be estimated by a constant.
The length of the intersection line may be estimated by 1. For each refinement level k we subdivide this line into 2^k parts of length h_k, where h_k = 1 / 2^k is the size of a simplex of this level. The number of simplices containing such a part is fixed by the dimension, and because of the regularity the number of simplex calls on this level necessary to travel along this part may be estimated by a constant. Thus, the number of simplex calls in level k can be estimated by 2^k. The sum over all levels from 0 to l = \log_2(1 / epsilon) can be estimated by 2^{l+1}, or 1 / epsilon. Usually, for the time of the special algorithm for the size delta, we obtain a better estimate than delta / epsilon. This allows us to obtain a better estimate for higher values of delta. For example, if we can estimate this call by a constant, we can estimate the resulting call time by 1 / delta.
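Written out, the sum behind this estimate (for delta = epsilon) is the geometric series

\[
\sum_{k=0}^{l} 2^k = 2^{l+1} - 1 \le 2^{l+1} = \frac{2}{\epsilon},
\qquad l = \log_2 \frac{1}{\epsilon} .
\]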
If the time required for calls of f(0) cannot simply be estimated by a constant, or this constant is too big, it may be estimated by the time for a single call (the first) plus the time for calls of f(1) for the edge between the starting point and the point we have to find.
Now let's estimate the overall time necessary for the geometry description in a simple but typical application. Assume we have a regular isotropic grid with n points in each direction in d-dimensional space. Assume we call f(0) at first with an optimal starting point, that is, at a distance of the grid edge length g. Assume also that we call f(k) only for simplices with corners at grid points and with minimal size (bounded by g). Assume also that the geometry is regular enough so that for distances of order g it may be approximated by planes, at least for the purpose of counting the number of intersections.

Consider a regular refinement step with factor 2. Instead of a single intersection of a k-side with a k-segment we now have 2^{d-k} such intersections. The function f(k) may be called only if we have an intersection of a (k-1)-segment with a side in the immediate neighbourhood. That's why the number of calls of f(k) can increase only by a factor 2^{d-k+1}, while the number of calls of f(0) increases by the factor 2^d. The time for each call will not increase; it will even decrease, because the distances become smaller. For the overall time of cogeometry calls we obtain the maximal factor 2^d --- the same as for the number of nodes. So, usually we have the following: the time required for cogeometry calls depends linearly on the number of nodes.

Another situation of interest is the usage of a grid of the same density for the geometry description. This situation is typical for time-dependent processes, if the grid of the previous step is used to define the new cogeometry. In this case, the two grids are usually of the same refinement level. To find the time behaviour in this case, we have to consider the refinement both of the new and of the old grid. The number of calls will increase by the factor 2^d. But what can we say about the time for each call? Here we have to compare two effects:
These considerations also show that the time for calls of f(k) for higher k is not relevant for larger numbers of nodes. It is relevant only for a small number of nodes, when many cells of the grid have intersections with the boundary.
Note that this is a consequence of the possibility to define cogeometries with an infinite number of regions, and other cogeometries with similarly infinite properties, like Julia sets or the Mandelbrot set. Such geometries obviously have to be simplified by any finite grid.
On the other hand, this description shows that dangerous situations have some special structure. We can classify such dangerous subregions by a "codimension", which is simply the number of the "very thin directions". For codimension 1 we have a crack, for codimension 2 a channel, and for the maximal codimension an enclave. There may also be segments of higher codimension which are dangerous. They all have some common properties:
Let's show that this strategy is universal. In principle, we have to consider only geometries which may be approximated by a grid. But a grid may be considered as a subdivision of the segments into convex parts. Now let's show that a geometry described only by convex segments is not dangerous: there is a grid generation algorithm which allows the cogeometry in a given region to be defined without topological errors (in exact real arithmetic) for an arbitrary finite cogeometry consisting only of convex segments. Let's describe this algorithm. We start with a simplicial grid containing the region of interest. In the first step, we detect the segment of every node of the grid by calls of f(0).
In step k we detect the topology of the k-dimensional sides of the simplices, based on the correct detection of the topology of their (k-1)-dimensional sides. We use the function f(k) with all (k-1)-flags which have been detected on the boundary of the side. If we obtain more than one intersection point inside the side, we subdivide the side into parts. For the boundary between the parts, the previous step allows us to define the geometry and topology correctly. This subdivision terminates because of the finite number of segments and the trivial fact that in general there may be only one intersection between a convex segment and a simplex of the related codimension. Because of the convexity of the segments, for every pair of flags we have found in the same segment, the whole line between them lies in this segment. Finally, we define the segments as the convex hulls of all points of the segment found by the algorithm (including the boundary points).
It can be easily proved by induction over the dimension of the sides, and for each side dimension by induction over the dimension of the segments, that every point on a side may be described as a convex linear combination of such points. The base of the induction over the sides are the corners defined with f(0); the base of the induction on each part of a side is the single intersection point on the side. We use the fact that a convex segment is planar, and consider a line through the point, lying in this segment and in the side. Each end of this line lies on a boundary of the side or at a boundary point of the segment; a boundary point of the segment is part of a segment of lower dimension. Thus, for both ends we obtain the required linear combination by induction, and therefore, as a convex combination of the two ends, we obtain it also for our point.
We have not considered here the case of degeneration. But the geometry is defined by a finite number N of real numbers (the vertex coordinates). That's why, if we obtain a degeneration in the algorithm, we can simply restart the algorithm with an irrational modification of the initial values. After at most N such restarts (each time with an irrational modification independent of the previous ones) there will be no degeneracy. This proves the theorem.

In reality, it is not necessary to subdivide all segments into approximately convex parts. Only small, very non-convex parts have to be modified. Usually, if all details of the geometry are big enough that some points of the grid lie inside, it is not necessary to make such subdivisions.
Subdivisions may also be useful for obtaining sharp edges and corners. If we do not subdivide the faces at a boundary edge, the position of the edge may not be detected; that means the edge or the corner will be rounded. To avoid this effect, we can subdivide the boundary face into two parts with a boundary line between them. Now the previous algorithm easily detects the edge and explicitly computes grid nodes on the boundary line. Subdividing the boundary line by a vertex at the position of the corner likewise leads to an explicit grid node at the corner.