
Mesh

Mesh is the holder for an unstructured mesh database. It is a fundamental class of the library. A mesh can be loaded from several commonly used database formats and is used for the simulation of a physical phenomenon; the results of the simulation can then be stored.

Mesh parameters are stored in classes located in the src/mesh/store directory. All classes are structures of vectors built on the serializededata class. As the library is based on a two-level decomposition, each class stores data for the current process only, and the data are divided among threads by the parameter distribution. Once the mesh is composed, it should be possible to manipulate the data using nearest-neighbor operations only. The content of the classes is described below.
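
To illustrate the per-thread distribution mentioned above, here is a minimal sketch (not the actual serializededata API): a flat data vector split among threads by a distribution array whose entries mark the range owned by each thread.

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<int> nodeIDs = {10, 11, 12, 13, 14, 15, 16};
    // distribution[t] .. distribution[t + 1] is the range owned by thread t
    std::vector<size_t> distribution = {0, 3, 5, 7}; // 3 threads

    for (size_t t = 0; t + 1 < distribution.size(); ++t) {
        std::printf("thread %zu:", t);
        for (size_t i = distribution[t]; i < distribution[t + 1]; ++i) {
            std::printf(" %d", nodeIDs[i]);
        }
        std::printf("\n");
    }
}
```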

node store

A holder for all mesh nodes. Since a node can be held by more than one MPI process, we define a node's holder as the process with the lowest MPI rank among the processes that hold the node. The holder performs global operations on the node (storing, assigning the DOF index, etc.). On the other processes, the node is either skipped or its assigned data are synchronized by a nearest-neighbor operation. During the synchronization, data can be exchanged by messages of known length and with a defined order of nodes, since nodes are stored sorted according to:

n1 < n2 iff n1.holder < n2.holder || (n1.holder == n2.holder && n1.id < n2.id)
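
A minimal sketch of this ordering as a C++ comparator (the Node struct and its members are illustrative, not the library's types): nodes are sorted primarily by holder rank, then by ID.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

struct Node { int holder, id; };

bool nodeLess(const Node &n1, const Node &n2) {
    if (n1.holder != n2.holder) {
        return n1.holder < n2.holder; // nodes held by lower ranks come first
    }
    return n1.id < n2.id;             // within a holder, order by ID
}

int main() {
    std::vector<Node> nodes = {{2, 7}, {0, 4}, {2, 1}, {0, 9}};
    std::sort(nodes.begin(), nodes.end(), nodeLess);
    for (const Node &n : nodes) {
        std::printf("holder %d, id %d\n", n.holder, n.id);
    }
}
```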

[Figure: example of node ordering across processes]

An example of node ordering is in the figure; highlighted nodes are held by a particular process. The array of nodes is described by three variables: nhalo, offset, and size, which denote the number of nodes on a particular process that are held by lower-rank MPI processes, the total number of nodes held by lower-rank MPI processes, and the number of nodes held by the process, respectively. The total number of nodes of a given mesh is stored in the variable totalSize. Each node also has a position that denotes the position of the node within an array containing the nodes of all processes. Hence, the position of the i-th node held by a process is given by i - nhalo + offset if nhalo <= i; otherwise the position is assigned by the holder. The position can be utilized during storing since it allows computing the data position without communication.
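
A short sketch of this position formula, assuming the variables nhalo and offset as described above (the function name is illustrative):

```cpp
#include <cstdio>

long globalPosition(long i, long nhalo, long offset) {
    if (i < nhalo) {
        return -1; // halo node: the position is assigned by its holder
    }
    return i - nhalo + offset; // i-th held node, shifted by nodes of lower ranks
}

int main() {
    long nhalo = 2, offset = 40; // 40 nodes are held by lower-rank processes
    for (long i = 0; i < 5; ++i) {
        std::printf("local %ld -> global %ld\n", i, globalPosition(i, nhalo, offset));
    }
}
```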

Nodes are also described by the following parameters:

  • distribution - distribution of nodes to threads
  • IDs - node IDs as read from the input database
  • elements - list of elements that reference a given node
  • originCoordinates - original coordinates in the case of e.g. mesh morphing
  • coordinates - coordinates used during the simulation
  • ranks - list of MPI processes that also hold a given node (note that if the first rank equals the rank of a particular process, that process is the holder of the node; see the sketch after this list)
  • domains - list of globally indexed domains that contain a given node
  • data - data assigned to nodes
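
A minimal sketch of the holder test hinted at in the ranks item above, assuming ranks is stored as a plain list of process ranks with the holder first:

```cpp
#include <cstdio>
#include <vector>

// A process holds a node iff the first entry of the node's ranks list is its own rank.
bool isHolder(const std::vector<int> &ranks, int myRank) {
    return !ranks.empty() && ranks.front() == myRank;
}

int main() {
    std::vector<int> ranks = {1, 3, 4}; // processes sharing the node
    std::printf("rank 1 holds the node: %d\n", isHolder(ranks, 1));
    std::printf("rank 3 holds the node: %d\n", isHolder(ranks, 3));
}
```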

element store

A holder for all mesh elements. Since no halo elements are stored, each element is unique across all processes (there is no equivalent to the nodes' parameter nhalo). Elements are described by a list of nodes (in a defined order). Referenced nodes are indexed within a process (procNodes) and within a domain (domainNodes). Global indexing of nodes has to be computed from the nodes' IDs.

From the simulation point of view, elements are divided into clusters, and each cluster is divided into domains. Hence, clusters define the first level of decomposition and domains define the second. Clusters are assigned to MPI processes and domains are assigned to OpenMP threads. In the optimal case, each MPI process has exactly one cluster and the number of domains is divisible by the number of OpenMP threads (in addition, domains contain an equal number of elements and clusters contain an equal number of domains). From the FETI point of view, clusters have to be contiguous. Hence, the elements on an MPI process are sometimes divided into more clusters (in order to make the clusters contiguous). In addition, domains are slightly imbalanced when a real geometry is used. Hence, the simulation process has to be aware of this situation.
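
An illustrative block distribution of domains to OpenMP threads for the ideal case described above (the real decomposition also balances the number of elements per domain; all values here are made up):

```cpp
#include <cstdio>
#include <vector>

int main() {
    int domains = 8, threads = 4;              // 8 % 4 == 0 in the ideal case
    std::vector<int> domainDistribution(threads + 1);
    for (int t = 0; t <= threads; ++t) {
        domainDistribution[t] = t * domains / threads;
    }
    for (int t = 0; t < threads; ++t) {
        std::printf("thread %d: domains [%d, %d)\n",
                    t, domainDistribution[t], domainDistribution[t + 1]);
    }
}
```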

Similarly to nodes, there are parameters offset, size, and totalSize. The same parameters also exist for bodies (bodiesOffset, bodiesSize, bodiesTotalSize), clusters (clustersOffset, clustersSize, clustersTotalSize), and domains (domainsOffset, domainsSize, domainsTotalSize). Elements are assigned to regions of elements. Occurrence in a region is stored in the bit-array regions. The bit order is the same as the order of regions in elementsRegions. The size of the bit-array is regionMaskSize. Elements are divided into intervals according to their occurrence in regions, their occurrence in a domain, and the element's type. All elements in a given interval have the same physical parameters.
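
A sketch of the region bit-array lookup described above: element e belongs to region r iff bit r of its mask is set. The exact mask layout (one 32-bit word per element here) is an assumption for illustration only.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

bool inRegion(const std::vector<std::uint32_t> &regions,
              size_t regionMaskSize, size_t e, size_t r) {
    // regionMaskSize words per element; test bit r of element e's mask
    return regions[e * regionMaskSize + r / 32] & (1u << (r % 32));
}

int main() {
    size_t regionMaskSize = 1;                           // one word per element
    std::vector<std::uint32_t> regions = {0b101, 0b010}; // masks of two elements
    std::printf("element 0 in region 2: %d\n", (int)inRegion(regions, regionMaskSize, 0, 2));
    std::printf("element 1 in region 2: %d\n", (int)inRegion(regions, regionMaskSize, 1, 2));
}
```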

Elements are also described by the following parameters:

  • dimension - mesh dimension
  • distribution - distribution of elements to threads
  • body
  • material
  • epointers - pointer to an element description
  • faceNeighbors - neighboring elements in global indexing (-1 for a face without a neighbor)
  • edgeNeighbors - neighboring elements in global indexing (#number, neigh1, neigh2, ...); see the sketch after this list
  • domainDistribution - distribution of domains to threads
  • elementsDistribution - distribution of elements to domains
  • ecounters - total number of elements with a given type
  • eintervals - elements intervals
  • eintervalsDistribution - distribution of elements intervals to threads
  • data - data assigned to elements
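
A minimal sketch of walking the edgeNeighbors layout noted above: each edge stores a count followed by that many neighbor element indices (the values are made up).

```cpp
#include <cstdio>
#include <vector>

int main() {
    // two edges: the first has 2 neighbors (5, 9), the second has 1 (12)
    std::vector<int> edgeNeighbors = {2, 5, 9, 1, 12};

    for (size_t i = 0; i < edgeNeighbors.size(); ) {
        int count = edgeNeighbors[i++];
        std::printf("edge neighbors:");
        for (int n = 0; n < count; ++n) {
            std::printf(" %d", edgeNeighbors[i++]);
        }
        std::printf("\n");
    }
}
```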

region store

Regions are fundamental for the definition of mesh parameters and boundary conditions. Regions are of two types: regions of elements and boundary regions. An elements region is a set of elements. A boundary region is a set of boundary elements or a set of nodes. Regions can be referenced in the ecf by their name.

All parameters that describe nodes and elements are also available on each region. However, where possible, the data are pointers into the nodes and elements containers. The parameters below are identical for both types of regions:

  • name
  • nodes - list of a region's nodes (offsets to nodes)
  • nodeInfo - description of a region's nodes (for storing)

boundary region store

Boundary regions are of two types: regions with dimension=0 are regions of nodes, and regions with dimension>0 are regions of boundary elements. Boundary regions are described by the following parameters:

  • distribution - distribution of elements to threads
  • procNodes - nodes in a process indexing
  • triangles - triangularized boundary elements
  • epointers - pointers to element descriptions
  • emembership - index of a parent element (the parent with the lower ID)
  • eintervals - elements intervals
  • eintervalsDistribution - distribution of elements intervals to threads

elements region store

A region of elements is a list of pointers into the element store. Parameters of an elements region are similar to the parameters of elements (restricted to the given region only).

FETI data store

This store contains data needed for the internal FETI solver:

  • domainDual - dual graph of domains
  • corners - corner nodes
  • innerFixPoints - inner fix-points (for computation of the kernel)
  • surfaceFixPoints - fix-points on the surface (for computation of kernel)
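
An illustrative adjacency-list view of domainDual: for each domain, the list of domains it is connected to. The storage form and values are assumptions for illustration only.

```cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<std::vector<int>> domainDual = {
        {1},       // domain 0 neighbors domain 1
        {0, 2},    // domain 1 neighbors domains 0 and 2
        {1}        // domain 2 neighbors domain 1
    };
    for (size_t d = 0; d < domainDual.size(); ++d) {
        std::printf("domain %zu:", d);
        for (int n : domainDual[d]) std::printf(" %d", n);
        std::printf("\n");
    }
}
```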