
hommx.hmm

Implementation of the Heterogeneous Multiscale Method (HMM) using dolfinx.

Classes:

PoissonHMM: Solver for the HMM applied to the Poisson equation.

hommx.hmm.REFERENCE_EVALUATION_POINT module-attribute

REFERENCE_EVALUATION_POINT = np.array([[1 / 3, 1 / 3]])

hommx.hmm.PoissonHMM

Solver for the multiscale Poisson problem using the HMM.

This class implements the Heterogeneous Multiscale Method for a Poisson problem. We want to solve the weak formulation of the Poisson problem:

\[ \int_\Omega (A\nabla u) \cdot \nabla v dx = \int_\Omega fv dx \]

with Dirichlet boundary conditions \(u=0\) on \(\partial\Omega\).

We do this by approximating the homogenized coefficient on every cell and using the adapted bilinear form

\[ a_H(v_H, w_H) = \sum_{T\in \mathcal T_H} \frac{|T|}{|Y_\varepsilon(c_T)|} \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) \nabla R_{T, h}(v_H)\cdot \nabla R_{T, h}(w_H) \, dx, \]

where \(R_{T, h}(v_H) = v_H|_{Y_\varepsilon(c_T)} + \tilde{v_h}\) is the reconstruction operator, with \(\tilde{v_h}\in V_{h, \#}(Y_\varepsilon(c_T))\) the solution of the cell problem

\[ \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) \nabla\tilde{v_h} \cdot \nabla z_h dx = - \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) \nabla v_H \cdot \nabla z_h dx \]

Note that it is the gradient of the macro-scale function \(v_H\) that appears on the right-hand side.

\(Y_\varepsilon(c_T) = c_T + [-\varepsilon/2, \varepsilon/2]^d\) is the micro mesh cell, centered at the macro cell center \(c_T\).
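The cell problem above maps almost directly onto a dolfinx variational problem. The following is a minimal, purely illustrative sketch (not the hommx implementation), assuming a dolfinx 0.8-style API and omitting the periodicity constraint of \(V_{h, \#}\), which the actual solver has to enforce:

```python
# Illustrative sketch of the cell problem on the unit cell Y (not the hommx code).
# Assumes dolfinx >= 0.8; the periodicity/zero-mean constraint on V_{h,#} is omitted.
import numpy as np
import ufl
from mpi4py import MPI
from dolfinx import fem, mesh

msh_micro = mesh.create_unit_square(MPI.COMM_SELF, 16, 16)     # reference unit cell Y
V = fem.functionspace(msh_micro, ("Lagrange", 1))

y = ufl.SpatialCoordinate(msh_micro)
A_cT = 2.0 + ufl.sin(2 * ufl.pi * y[0]) * ufl.cos(2 * ufl.pi * y[1])  # A(c_T, y), 1-periodic in y

grad_vH = fem.Constant(msh_micro, np.array([1.0, 0.0]))  # constant gradient of a P1 macro basis function

v_tilde = ufl.TrialFunction(V)
z = ufl.TestFunction(V)
a = ufl.inner(A_cT * ufl.grad(v_tilde), ufl.grad(z)) * ufl.dx  # LHS of the cell problem
L = -ufl.inner(A_cT * grad_vH, ufl.grad(z)) * ufl.dx           # RHS: -int A grad(v_H) . grad(z)
# Solving a = L (subject to the periodic constraint) yields the corrector tilde{v_h}.
```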

Parallelization: The code can, in theory, run in parallel. If you run the code in serial, the setup of the meshes can be arbitrary. If you want to run the code in parallel, only the case where the micro mesh lives on MPI.COMM_SELF is currently supported. Parallelization is achieved by each process solving the cell problems for its local part of the macro mesh. Passing a mesh that lives on anything but MPI.COMM_SELF to the msh_micro parameter can lead to unexpected behaviour.

Notes
  • For now, only zero Dirichlet boundary conditions are implemented.
  • It is the user's responsibility to ensure that the micro meshes fit into the macro mesh cells, i.e. the shifted and scaled version of \(Y\), namely \(Y_\varepsilon(c_T)\), needs to fit within the element \(T\). Otherwise, the interpolation of the macro-scale basis functions onto the micro scale may lead to unexpected behaviour.

__init__

__init__(msh: mesh.Mesh, A: Callable[[np.ndarray[tuple[int], np.dtype[float]]], Callable[[ufl.SpatialCoordinate], ufl.Form]], f: ufl.form, msh_micro: mesh.Mesh, eps: float, petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_PoissonHMM')

Initializes the solver, meshes and boundary conditions.

Parameters:

Name Type Description Default
msh mesh.Mesh

The macro mesh, on which we want to solve the oscillatory Poisson equation. This mesh can live on MPI.COMM_WORLD and the cell problems are automatically solved on each process in parallel.

required
A Callable[[np.ndarray[tuple[int], np.dtype[float]]], Callable[[ufl.SpatialCoordinate], ufl.Form]]

The coefficient. It should be callable as A(x)(y), where \(x\) is a spatial coordinate on the macro mesh (the cell center \(c_T\)) and \(y\) is a ufl.SpatialCoordinate on the microscopic mesh that is passed to dolfinx to solve the cell problem. A needs to be 1-periodic in \(y\), at least for the theory to work (see the usage sketch after this parameter list).

required
f ufl.form

The right hand side of the Poisson problem.

required
msh_micro mesh.Mesh

The microscopic mesh; this needs to be the unit square. Furthermore, it needs to live on MPI.COMM_SELF, since every process owns a whole copy of the microscopic mesh. If any communicator other than MPI.COMM_SELF is used, the results will most likely be wrong.

required
eps float

\(\varepsilon\), the microscopic scaling. Note that this needs to be small enough so that the micro cells lie entirely within their corresponding macro element. If this is not the case, the results may be wrong.

required
petsc_options_global_solve optional

PETSc solver options for the global solve; see the PETSc documentation for available options.

None
petsc_options_cell_problem optional

PETSc solver options for the cell problems; see the PETSc documentation for available options.

None
petsc_options_prefix optional

Options prefix used for the PETSc options. Defaults to "hommx_PoissonHMM".

'hommx_PoissonHMM'
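A minimal usage sketch is given below. The import path `from hommx.hmm import PoissonHMM` and the concrete meshes, coefficient and right-hand side are illustrative assumptions; only the constructor signature documented above is taken from the API.

```python
# Hedged usage sketch; import path and problem data are illustrative assumptions.
import ufl
from mpi4py import MPI
from dolfinx import mesh

from hommx.hmm import PoissonHMM  # assumed import path

eps = 1.0 / 16.0
msh = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)          # macro mesh (may be distributed)
msh_micro = mesh.create_unit_square(MPI.COMM_SELF, 16, 16)   # micro mesh: unit square on COMM_SELF


def A(x):
    """Oscillatory coefficient A(x)(y), 1-periodic in y; x is the cell center c_T."""
    def A_at_x(y):
        return 2.0 + ufl.sin(2 * ufl.pi * y[0]) * ufl.cos(2 * ufl.pi * y[1]) + x[0]
    return A_at_x


x = ufl.SpatialCoordinate(msh)
f = 1.0 + x[0] * x[1]  # right-hand side, assumed here to be a UFL expression on the macro mesh

solver = PoissonHMM(msh, A, f, msh_micro, eps)
u_H = solver.solve()
# solver.plot_solution()  # optional, requires pyvista
```

To run in parallel, such a script is typically launched with mpirun/mpiexec; each rank then solves the cell problems for its local macro cells, as described above.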

_assemble_stiffness

_assemble_stiffness()

Assembly of the stiffness matrix in parallel.

_solve_cell_problem

_solve_cell_problem(cell_index: int) -> np.ndarray[tuple[int, int], np.dtype[float]]

Solves the cell problem on one cell. All computation is done on one process and no communication takes place between the processes.

Parameters:

Name Type Description Default
cell_index int

Process-local index of the cell for which the homogenized coefficient is computed.

required

Returns:

Type Description
np.ndarray[tuple[int, int], np.dtype[float]]

np.ndarray: the local stiffness matrix of the cell.
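For orientation, the following purely illustrative sketch (not taken from the hommx source) shows how a local P1 stiffness matrix on a triangle with a constant, e.g. homogenized, 2x2 coefficient can be assembled:

```python
# Illustrative only: local P1 stiffness matrix on a triangle T for a constant 2x2
# coefficient A_hom (e.g. obtained from a cell problem).
import numpy as np


def local_stiffness(points: np.ndarray, A_hom: np.ndarray) -> np.ndarray:
    """points: (3, 2) vertex coordinates of T, A_hom: (2, 2) constant coefficient."""
    B = np.hstack([np.ones((3, 1)), points])  # rows [1, x_i, y_i] of the P1 Vandermonde matrix
    C = np.linalg.inv(B)                      # columns hold the coefficients of each P1 basis function
    grads = C[1:, :]                          # (2, 3): constant gradient of each basis function
    area = 0.5 * abs(np.linalg.det(B))        # triangle area
    return area * grads.T @ A_hom @ grads     # (3, 3) local stiffness matrix
```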

solve

solve() -> fem.Function

Assembles the LHS and RHS and solves the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.

plot_solution

plot_solution()

Simple plot of the solution using pyvista.

solve() needs to be run before calling this. When running in parallel, each process only plots its local part of the solution.

hommx.hmm._triangle_area

_triangle_area(points)
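No description is attached to this helper; judging by its name (and the \(|T|\) factor in the HMM bilinear form), it presumably computes the area of a triangle. A minimal sketch under that assumption, with points assumed to hold the three vertex coordinates:

```python
# Minimal sketch, assuming `points` holds the three vertex coordinates of a triangle;
# the actual hommx implementation may differ.
import numpy as np


def _triangle_area(points: np.ndarray) -> float:
    """Area of the triangle spanned by `points` (shape (3, 2)), via the cross product."""
    v1 = points[1] - points[0]
    v2 = points[2] - points[0]
    return 0.5 * abs(v1[0] * v2[1] - v1[1] * v2[0])
```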