Let us now consider the first steps of our grid generation algorithm that are
connected with the octree-like data structure.
The creation of the coarse grid is the simplest part of the grid generation
algorithm. The coarse grid is a regular rectangular grid --- a tensor product
grid of three one-dimensional grids in the x-, y- and z-directions.
These one-dimensional grids have to be defined by the user. There are two
ways to define such a grid, and they can be combined:
- defining a list of coordinates: the user specifies the length of the
list and the coordinates in this direction; the minimal list length is two.
- defining a regular a-priori refinement in every direction.
A special possibility is the definition of a two-dimensional or one-dimensional
grid by using list length 1 in the z-direction or in the y- and z-directions,
respectively.
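The two user-supplied definitions can be sketched as follows. This is a
minimal illustration only; the type and function names (`Axis1D`,
`refine_axis`) are our assumptions, not the generator's actual interface.

```c
#include <stdlib.h>

/* One direction of the tensor product coarse grid: a user-defined list
 * of coordinates.  A list length of 1 in the z-direction yields a 2D
 * grid; length 1 in both y and z yields a 1D grid. */
typedef struct {
    int     n;      /* number of coordinates (minimum 2 for a real direction) */
    double *coord;  /* coordinates in ascending order */
} Axis1D;

/* Apply a regular a-priori refinement: every interval of the axis is
 * split into 'factor' equal parts. */
static Axis1D refine_axis(const Axis1D *in, int factor)
{
    Axis1D out;
    out.n     = (in->n - 1) * factor + 1;
    out.coord = malloc(out.n * sizeof(double));
    for (int i = 0; i < in->n - 1; i++) {
        double h = (in->coord[i + 1] - in->coord[i]) / factor;
        for (int j = 0; j < factor; j++)
            out.coord[i * factor + j] = in->coord[i] + j * h;
    }
    out.coord[out.n - 1] = in->coord[in->n - 1];
    return out;
}
```

Combining both possibilities then simply means refining each user-given
coordinate list once before the octree refinement starts.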
The refinement procedure we use is a variant of the quadtree/octree
refinement procedure. The standard quadtree/octree refinement starts
with a regular rectangular (2D) or hexahedral (3D) grid. Then the
refinement criteria are tested for every element. If an element
has to be refined, the criteria are also tested for the new, refined
elements. In addition, some regularization is necessary: usually, if an
element has to be refined twice, the neighbour elements also have to be
refined at least once.
The main difference between this approach and our algorithm is
the following: instead of refining elements, we refine
edges. This method has the following advantages:
- It seems more natural to define refinement criteria on edges.
- Both the refinement criteria and the procedures using the
refinement become much more dimension-independent.
- An edge refinement creates one new point instead of the two (2D) or
four (3D) created by an element refinement, so the resulting grid may
contain fewer points.
The refinement criteria have to be defined by the application and are
not part of the grid generation algorithm. There are isotropic criteria
(a maximal edge length around a given point) and anisotropic criteria
(whether to refine a given edge or not). They are realized in C as
function parameters.
The region containing a new node is computed immediately after
the creation of this node in the octree structure by a call of the
first function of the contravariant geometry description. This allows
the region number and interpolated attribute values to be used in the
evaluation of the refinement criteria.
A minimal regularization is part of the refinement algorithm: the
refinement levels of neighbouring edges in the same direction may
differ by at most one. This minimal regularization cannot be switched
off; a refinement tree without this property has to be considered
incorrect. To avoid such an incorrect state of the refinement tree, the
edge refinement procedure tests the correctness of the edge refinement
before the refinement is performed. If the test fails, the
refinement procedure is called recursively for the lower-level
neighbour. This recursion stops at the latest at a coarse grid edge
(refinement level zero).
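The recursion just described can be sketched as follows. This is a
strongly simplified model (the real octree stores children and
neighbourhood differently); `Edge` and `refine_edge` are our names.

```c
#include <stddef.h>

typedef struct Edge {
    int          level;     /* refinement level, 0 on the coarse grid  */
    int          refined;   /* non-zero once the edge has been split   */
    struct Edge *neighbour; /* lower-level neighbour in the same
                               direction, NULL if there is none        */
} Edge;

/* Refine an edge so that afterwards the refinement levels of
 * neighbouring edges in the same direction differ by at most one. */
static void refine_edge(Edge *e)
{
    if (e->refined)
        return;
    /* Correctness test: refining 'e' creates children of level
     * e->level + 1.  An unrefined lower-level neighbour would then
     * violate the maximal level difference of one, so it is refined
     * first.  The recursion stops at the latest on the coarse grid,
     * where all edges have level zero. */
    if (e->neighbour && !e->neighbour->refined
        && e->neighbour->level < e->level)
        refine_edge(e->neighbour);
    e->refined = 1;
}
```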
But to get a regular grid, other grid regularization methods are
necessary. If a long side of a rectangle is subdivided, the
parallel sides also have to be subdivided to avoid a spider. For the
other edges in the direct neighbourhood of the refined edge, some
application-controlled regularization must be possible. For example,
it must be possible for the application to require isotropic
refinement. This type of regularization refinement is also part of the
refinement call. It is realized by recursive calls of the
refinement function, but after the refinement of the edge itself. In
all of these cases the edge length stays the same or grows, so there
can be no infinite loops, but the number of edges that have to be
refined because of regularization may be very large.
An error in the application-dependent refinement criteria can
lead to an infinite loop. To avoid this, we have included an absolute
lower bound for refinement. This bound may also be used to limit the
number of nodes.
In the next step, we compute the intersections of the octree
elements (edges, sides, cuboids) with the boundary segments. For this
purpose, we can use the contravariant geometry description. Moreover,
any geometry description usable in this algorithm is already similar
to the contravariant geometry description.
We compute the intersections only after the initial refinement
step, because we assume that it is not possible to describe the
geometry correctly on a coarser grid.
- For an edge whose ends lie in different regions, there must be an
intersection of the edge with a boundary face. We find this
intersection.
- For a rectangle whose border has intersections with boundary
faces that are not "paired", there have to be intersections of
boundary lines with this rectangle or other intersections of the
rectangle border with boundary faces. We find these intersections as
well.
- In 3D we also have to consider "unpaired" intersections of
boundary lines with the border of a cube. Here we have to find a
boundary node --- the end of a boundary line --- in the cube, or other
intersections of the boundary lines with the border of the cube. We
find these intersections as well.
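The first of these cases can be sketched as a bisection on the region
function. This assumes the first function of the contravariant geometry
description is available as a callback; the names (`RegionOf`,
`region_of`, `find_boundary_crossing`) and the example region function
are ours.

```c
#include <math.h>

/* Maps a point to its region number (first function of the
 * contravariant geometry description, signature assumed). */
typedef int (*RegionOf)(const double p[3]);

/* Example region function: region 0 left of the plane x = 0.3,
 * region 1 to the right of it. */
static int left_right(const double p[3]) { return p[0] < 0.3 ? 0 : 1; }

/* The ends a and b of an octree edge lie in different regions, so the
 * edge must cross a boundary face.  Locate the crossing by bisection;
 * the invariant is that lo stays in the region of a and hi does not. */
static void find_boundary_crossing(RegionOf region_of,
                                   const double a[3], const double b[3],
                                   double hit[3], double tol)
{
    double lo[3] = { a[0], a[1], a[2] };
    double hi[3] = { b[0], b[1], b[2] };
    int ra = region_of(a);

    while (fabs(hi[0] - lo[0]) + fabs(hi[1] - lo[1])
         + fabs(hi[2] - lo[2]) > tol) {
        double mid[3] = { (lo[0] + hi[0]) / 2, (lo[1] + hi[1]) / 2,
                          (lo[2] + hi[2]) / 2 };
        if (region_of(mid) == ra) {
            lo[0] = mid[0]; lo[1] = mid[1]; lo[2] = mid[2];
        } else {
            hi[0] = mid[0]; hi[1] = mid[1]; hi[2] = mid[2];
        }
    }
    hit[0] = (lo[0] + hi[0]) / 2;
    hit[1] = (lo[1] + hi[1]) / 2;
    hit[2] = (lo[2] + hi[2]) / 2;
}
```

The rectangle and cube cases from the list above need additional
geometry queries (boundary lines and nodes) and are not covered by this
sketch.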
What can we do now with the intersections we have found? There are the
following possibilities:
- To shift the nearest regular grid point to this point.
- To include a new point.
- To ignore the intersection.
- To refine further.
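The four possibilities can be represented as an enumeration, together
with a toy decision heuristic. Both the names and the heuristic are
illustrative assumptions; the real decision logic is far more involved,
as discussed below.

```c
/* The four possible reactions to a computed boundary intersection. */
typedef enum {
    SHIFT_NEAREST_POINT,  /* move the nearest regular grid point      */
    INSERT_NEW_POINT,     /* include the intersection as a new node   */
    IGNORE_INTERSECTION,  /* drop the intersection                    */
    REFINE_FURTHER        /* refine and decide again on a finer level */
} IntersectionAction;

/* Toy heuristic only: shift the nearest grid point when the
 * intersection lies close to it, otherwise insert a new point. */
static IntersectionAction choose_action(double dist_to_nearest,
                                        double edge_length)
{
    if (dist_to_nearest < 0.25 * edge_length)
        return SHIFT_NEAREST_POINT;
    return INSERT_NEW_POINT;
}
```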
Because further refinement is a necessary part of the boundary
grid computation, it was not difficult to include additional
application-dependent refinement criteria into the algorithm. So it is
also possible to use boundary data (boundary conditions or attribute
functions defined on the boundary) for refinement.
What we have to do depends on the concrete geometrical situation.
Several problems have to be solved:
- We have to avoid the creation of a "spider".
- We have to avoid situations which lead to incorrect "connection
values" for boundary edges or to edges which intersect the boundary
in the Delaunay grid which will be created later.
- We have to detect "dangerous" situations which require higher
refinement or additional tests to compute the correct geometry.
- We have to minimize the number of additional nodes.
For example, if a long edge in an anisotropic grid is intersected, it
may be best to ignore the intersection to avoid a spider. But this can
lead to Delaunay edges breaking through the boundary, in which case
further refinement may be best, although this leads to a lot of
additional nodes.
Classifying the geometrical situations and finding an optimal
solution for each of them is very difficult. The optimization of this
step may be one of the central parts of further development. The main
principle here is "trial and error": if a bad situation occurs in an
application, we have to consider this situation, find a solution,
implement it, and hope that this change will not have fatal results in
other situations. Especially in 3D, around a boundary line or node,
this is not easy, and the current state is not optimal.
Let us briefly describe some principles of the current
realization:
- The creation of a new point is realized as a combination of
refinement and point shift.
- The original position of a shifted point is saved. To compute the
position of a refinement point on an edge with shifted end(s), the
original position is used.
- If the new point is closer to the boundary position than the
unshifted position of the (shifted) end of the edge, the new point is
shifted to the boundary, and the old point returns to its initial
position.
- A boundary face point has to be located on a shorter edge.
- A boundary line point in a rectangle requires isotropic
refinement in the two rectangle directions. The orthogonal direction
may not have a higher refinement.
- A boundary node in a cuboid requires an isotropic refinement
of this cuboid.
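The second principle above, placing a refinement point according to the
original rather than the shifted positions, can be sketched as follows;
`Node` and `refinement_point` are our names for illustration.

```c
/* A grid node stores both its current (possibly shifted) position and
 * its original regular grid position. */
typedef struct {
    double pos[3];   /* current, possibly shifted, position */
    double orig[3];  /* original regular grid position      */
} Node;

/* The refinement point on an edge is the midpoint of the ORIGINAL
 * positions of its ends, even when one or both ends have been shifted
 * to the boundary. */
static void refinement_point(const Node *a, const Node *b, Node *out)
{
    for (int i = 0; i < 3; i++) {
        out->orig[i] = (a->orig[i] + b->orig[i]) / 2.0;
        out->pos[i]  = out->orig[i];
    }
}
```

Without this rule, a shifted end would drag all later refinement points
on the edge away from the regular grid positions.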