hmm

Base class for HMM solvers using DOLFINx.

hommx.hmm.BaseHMM

Bases: ABC

Abstract base class for Heterogeneous Multi-Scale Method solvers.

Provides common infrastructure for scalar and vector-valued HMM problems. Subclasses must implement:
  • _setup_macro_function_space(): set up the macro-scale function space
  • _setup_micro_function_space(): set up the micro-scale function space
  • _setup_cell_problem_forms(): set up the micro-scale cell problem forms

See PoissonHMM for an example.

function_space property

function_space: fem.FunctionSpace

Function space of the macro mesh.

__init__

__init__(msh: mesh.Mesh, A: Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form], f: Callable[[ufl.SpatialCoordinate], ufl.Form], msh_micro: mesh.Mesh, eps: float, petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_HMM')

Initializes the solver, meshes and boundary conditions.

Parameters:

msh : mesh.Mesh (required)
    The macro mesh, on which we want to solve the oscillatory PDE.

A : Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form] (required)
    The coefficient. It must be callable as A(x)(y), where x is a spatial coordinate on the macro mesh (the cell center c_T) and y is a ufl.SpatialCoordinate on the microscopic mesh; A must be 1-periodic in y (see the sketch after this parameter list).

f : Callable[[ufl.SpatialCoordinate], ufl.Form] (required)
    The right-hand side of the problem.

msh_micro : mesh.Mesh (required)
    The microscopic mesh; it must be the unit cell. It should live on MPI.COMM_SELF, since each process owns a whole copy.

eps : float (required)
    The microscopic scaling parameter.

petsc_options_global_solve : dict | None (default: None)
    PETSc solver options for the global solver.

petsc_options_cell_problem : dict | None (default: None)
    PETSc solver options for the cell problem solver.

petsc_options_prefix : str (default: 'hommx_HMM')
    Options prefix used for PETSc options.
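
To make the A(x)(y) calling convention concrete, here is a minimal sketch of a scalar coefficient and right-hand side. Only the curried call pattern and the 1-periodicity in y are taken from the description above; the concrete expressions are illustrative, and the exact return type expected by the solver may differ.

    import ufl

    def A(x):
        # x: the macro cell center c_T passed in by the solver;
        # y: a ufl.SpatialCoordinate on the unit-cell micro mesh.
        def A_y(y):
            # 1-periodic in y, with a mild dependence on the macro position x.
            return 2.0 + ufl.sin(2 * ufl.pi * y[0]) * ufl.cos(2 * ufl.pi * y[1]) + 0.1 * x[0]
        return A_y

    def f(x):
        # Right-hand side as a function of the macro coordinate.
        return 1.0 + x[0] * x[1]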

set_boundary_conditions

set_boundary_conditions(bcs: list[fem.DirichletBC] | fem.DirichletBC)

Set boundary conditions.

Parameters:

bcs : list[fem.DirichletBC] | fem.DirichletBC (required)
    A single BC or a list of BCs (see the sketch below).
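
A minimal sketch of building Dirichlet conditions with standard DOLFINx calls and handing them to the solver. Here solver is assumed to be an already-initialized HMM solver on a unit-square macro mesh; the geometry and values are illustrative.

    import numpy as np
    from dolfinx import default_scalar_type, fem

    V = solver.function_space  # macro function space of the HMM solver

    # u = 0 on the left and right boundaries of a unit-square macro mesh.
    dofs_left = fem.locate_dofs_geometrical(V, lambda x: np.isclose(x[0], 0.0))
    dofs_right = fem.locate_dofs_geometrical(V, lambda x: np.isclose(x[0], 1.0))
    bcs = [
        fem.dirichletbc(default_scalar_type(0), dofs_left, V),
        fem.dirichletbc(default_scalar_type(0), dofs_right, V),
    ]

    solver.set_boundary_conditions(bcs)  # a single DirichletBC is accepted as well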

set_right_hand_side

set_right_hand_side(f: Callable[[ufl.SpatialCoordinate], ufl.Form])

Sets the right hand side

Parameters:

f : Callable[[ufl.SpatialCoordinate], ufl.Form] (required)
    The right-hand side of the problem.

solve

solve() -> fem.Function

Assemble the LHS and RHS, and solve the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.

plot_solution

plot_solution(u: fem.Function | None = None)

Simple plot of the solution using pyvista.

solve() needs to be run before calling this. In parallel runs, each process only plots its local part.

hommx.hmm.PoissonHMM

Bases: BaseHMM

Solver for the Multi-Scale Poisson problem using the HMM.

This class implements the Heterogeneous Multi-Scale Method for a Poisson problem. We want to solve the weak formulation of the Poisson problem:

\[ \int_\Omega (A\nabla u) \cdot \nabla v dx = \int_\Omega fv dx \]

With Dirichlet-Boundary Conditions \(u=0\) on \(\partial\Omega\).

We do this by approximating the homogenized coefficient on every cell and using the adapted bilinear form

\[ a_H(v_H, w_H) = \sum_{T\in \mathcal T_H} \frac{|T|}{|Y_\varepsilon(c_T)|} \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) \nabla R_{T, h}(v_H)\cdot \nabla R_{T, h}(w_H) dx, \]

where \(R_{T, h}(v_H) = v_H|_{Y_\varepsilon(c_T)} + \tilde{v_h}\) with \(\tilde{v_h}\in V_{h, \#}(Y_\varepsilon(c_T))\) is the reconstruction operator, and \(\tilde{v_h}\) is the solution to

\[ \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) \nabla\tilde{v_h} \cdot \nabla z_h dx = - \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) \nabla v_H \cdot \nabla z_h dx \]

Note that the gradient of the macro-scale function \(v_H\) appears on the RHS.

\(Y_\varepsilon(c_T) = c_T + [-\varepsilon/2, \varepsilon/2]^d\) is the micro mesh cell.

Parallelization: The code can in theory run in parallel. If you run the code in serial, the setup of the meshes can be arbitrary. If you want to run the code in parallel, it is currently only supported that the micro mesh lives on MPI.COMM_SELF. Parallelization is done by each process solving the cell problems for its local part. Passing a mesh that lives on anything but MPI.COMM_SELF as the msh_micro parameter can lead to unexpected behaviour.

Notes
  • It is the user's responsibility to ensure that the micro meshes fit into the macro mesh cells, i.e. the shifted and scaled versions \(Y_\varepsilon(c_T)\) of \(Y\) need to fit within the element \(T\). Otherwise the interpolation of the macro-scale basis functions to the micro scale may lead to unexpected behaviour.
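
A minimal end-to-end usage sketch under the following assumptions: a 2D unit-square macro mesh on MPI.COMM_WORLD, a per-process unit-cell micro mesh on MPI.COMM_SELF, an illustrative 1-periodic coefficient, and a homogeneous Dirichlet condition set explicitly via set_boundary_conditions. Only the documented constructor signature and methods are taken from this page; everything else is illustrative.

    import numpy as np
    import ufl
    from mpi4py import MPI
    from dolfinx import default_scalar_type, fem, mesh
    from hommx.hmm import PoissonHMM

    # Macro mesh is distributed; the unit-cell micro mesh is duplicated on every process.
    msh = mesh.create_unit_square(MPI.COMM_WORLD, 16, 16)
    msh_micro = mesh.create_unit_square(MPI.COMM_SELF, 8, 8)

    # Illustrative coefficient A(x)(y), 1-periodic in y, and right-hand side f(x).
    A = lambda x: (lambda y: 2.0 + ufl.sin(2 * ufl.pi * y[0]) * ufl.cos(2 * ufl.pi * y[1]))
    f = lambda x: 1.0

    solver = PoissonHMM(msh, A, f, msh_micro, eps=1.0 / 16.0)

    # Homogeneous Dirichlet condition u = 0 on the whole boundary.
    V = solver.function_space
    tdim = msh.topology.dim
    facets = mesh.locate_entities_boundary(msh, tdim - 1, lambda x: np.full(x.shape[1], True))
    dofs = fem.locate_dofs_topological(V, tdim - 1, facets)
    solver.set_boundary_conditions(fem.dirichletbc(default_scalar_type(0), dofs, V))

    u_H = solver.solve()      # assembles the HMM stiffness matrix; may take a while
    solver.plot_solution(u_H)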

function_space property

function_space: fem.FunctionSpace

Function space of the macro mesh.

__init__

__init__(msh: mesh.Mesh, A: Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form], f: Callable, msh_micro: mesh.Mesh, eps: float, petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_PoissonHMM')

Initializes the solver, meshes and boundary conditions.

Parameters:

msh : mesh.Mesh (required)
    The macro mesh, on which we want to solve the oscillatory PDE.

A : Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form] (required)
    The coefficient. It must be callable as A(x)(y), where x is a spatial coordinate on the macro mesh (the cell center c_T) and y is a ufl.SpatialCoordinate on the microscopic mesh; A must be 1-periodic in y.

f : Callable (required)
    The right-hand side of the problem.

msh_micro : mesh.Mesh (required)
    The microscopic mesh; it must be the unit cell. It should live on MPI.COMM_SELF, since each process owns a whole copy.

eps : float (required)
    The microscopic scaling parameter.

petsc_options_global_solve : dict | None (default: None)
    PETSc solver options for the global solver.

petsc_options_cell_problem : dict | None (default: None)
    PETSc solver options for the cell problem solver.

petsc_options_prefix : str (default: 'hommx_PoissonHMM')
    Options prefix used for PETSc options.

set_boundary_conditions

set_boundary_conditions(bcs: list[fem.DirichletBC] | fem.DirichletBC)

Set boundary conditions.

Parameters:

bcs : list[fem.DirichletBC] | fem.DirichletBC (required)
    A single BC or a list of BCs.

set_right_hand_side

set_right_hand_side(f: Callable[[ufl.SpatialCoordinate], ufl.Form])

Sets the right hand side

Parameters:

f : Callable[[ufl.SpatialCoordinate], ufl.Form] (required)
    The right-hand side of the problem.

solve

solve() -> fem.Function

Assemble the LHS and RHS, and solve the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.

plot_solution

plot_solution(u: fem.Function | None = None)

Simple plot of the solution using pyvista.

solve() needs to be run before calling this. In parallel runs, each process only plots its local part.

hommx.hmm.PoissonStratifiedHMM

Bases: PoissonHMM

Solver for the Multi-Scale Poisson problem on stratified media using the HMM.

This class implements the Heterogeneous Multi-Scale Method for a Poisson problem. We want to solve the weak formulation of the Poisson problem:

\[ \int_\Omega (A\nabla u) \cdot \nabla v dx = \int_\Omega fv dx \]

With Dirichlet-Boundary Conditions \(u=0\) on \(\partial\Omega\).

We do this by approximating the homogenized coefficient on every cell and using the adapted bilinear form

\[ a_H(v_H, w_H) = \sum_{T\in \mathcal T_H} \frac{|T|}{|Y_\varepsilon(c_T)|} \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) \nabla R_{T, h}(v_H)\cdot \nabla R_{T, h}(w_H) dx, \]

where the reconstruction operator has the gradient \(\nabla R_{T, h}(v_H) = \nabla v_H|_{Y_\varepsilon(c_T)} + (D\theta(x))^\top\nabla\tilde{v_h}\) with \(\tilde{v_h}\in V_{h, \#}(Y_\varepsilon(c_T))\), and \(\tilde{v_h}\) is the solution to

\[ \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) (D\theta(x))^\top\nabla\tilde{v_h} \cdot (D\theta(x))^\top\nabla z_h dx = - \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) \nabla v_H \cdot (D\theta(x))^\top\nabla z_h dx \]

Note that the gradient of the macro-scale function \(v_H\) appears on the RHS.

\(Y_\varepsilon(c_T) = c_T + [-\varepsilon/2, \varepsilon/2]^d\) is the micro mesh cell.

Parallelization: The code can in theory run in parallel. If you run the code in serial, the setup of the meshes can be arbitrary. If you want to run the code in parallel, it is currently only supported that the micro mesh lives on MPI.COMM_SELF. Parallelization is done by each process solving the cell problems for its local part. Passing a mesh that lives on anything but MPI.COMM_SELF as the msh_micro parameter can lead to unexpected behaviour.

Notes
  • It is the user's responsibility to ensure that the micro meshes fit into the macro mesh cells, i.e. the shifted and scaled versions \(Y_\varepsilon(c_T)\) of \(Y\) need to fit within the element \(T\). Otherwise the interpolation of the macro-scale basis functions to the micro scale may lead to unexpected behaviour.

function_space property

function_space: fem.FunctionSpace

Function space of the macro mesh.

__init__

__init__(msh: mesh.Mesh, A: Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form], f: ufl.form, msh_micro: mesh.Mesh, eps: float, Dtheta_transpose: Callable[[fem.Constant], ufl.Form], petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_PoissonStratifiedHMM')

Initializes the solver, meshes and boundary conditions.

Parameters:

msh : mesh.Mesh (required)
    The macro mesh, on which we want to solve the oscillatory PDE.

A : Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form] (required)
    The coefficient. It must be callable as A(x)(y), where x is a spatial coordinate on the macro mesh (the cell center c_T) and y is a ufl.SpatialCoordinate on the microscopic mesh; A must be 1-periodic in y.

f : ufl.form (required)
    The right-hand side of the problem.

msh_micro : mesh.Mesh (required)
    The microscopic mesh; it must be the unit cell. It should live on MPI.COMM_SELF, since each process owns a whole copy.

eps : float (required)
    The microscopic scaling parameter.

Dtheta_transpose : Callable[[fem.Constant], ufl.Form] (required)
    Transpose of the Jacobian of \(\theta\), i.e. \((\frac{\partial \theta_j}{\partial x_i})_{i, j=1, ..., d}\) (see the sketch after this parameter list).

petsc_options_global_solve : dict | None (default: None)
    PETSc solver options for the global solver.

petsc_options_cell_problem : dict | None (default: None)
    PETSc solver options for the cell problem solver.

petsc_options_prefix : str (default: 'hommx_PoissonStratifiedHMM')
    Options prefix used for PETSc options.
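
To make the Dtheta_transpose argument concrete, here is a minimal hedged sketch for the illustrative stratification map \(\theta(x) = (x_0, x_1 + 0.1\sin(2\pi x_0))\). The map itself is made up; only the expected \((D\theta)^\top\) layout is taken from the parameter description.

    import ufl

    def Dtheta_transpose(x):
        # (Dθ(x))^T with entries (∂θ_j/∂x_i)_{i,j} for θ(x) = (x0, x1 + 0.1*sin(2*pi*x0));
        # x is the macro cell center passed in by the solver.
        return ufl.as_matrix([
            [1.0, 0.2 * ufl.pi * ufl.cos(2 * ufl.pi * x[0])],
            [0.0, 1.0],
        ])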

set_boundary_conditions

set_boundary_conditions(bcs: list[fem.DirichletBC] | fem.DirichletBC)

Set boundary conditions.

Parameters:

bcs : list[fem.DirichletBC] | fem.DirichletBC (required)
    A single BC or a list of BCs.

set_right_hand_side

set_right_hand_side(f: Callable[[ufl.SpatialCoordinate], ufl.Form])

Sets the right hand side

Parameters:

f : Callable[[ufl.SpatialCoordinate], ufl.Form] (required)
    The right-hand side of the problem.

solve

solve() -> fem.Function

Assemble the LHS and RHS, and solve the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.

plot_solution

plot_solution(u: fem.Function | None = None)

Simple plot of the solution using pyvista.

solve() needs to be run before calling this. In parallel runs, each process only plots its local part.

hommx.hmm.LinearElasticityHMM

Bases: BaseHMM

Solver for the Multi-Scale Linear Elasticity problem using the HMM.

This class implements the Heterogeneous Multi-Scale Method for a linear elasticity problem. We want to solve the weak formulation of the linear elasticity problem:

\[ \int_\Omega (A e(u)) : e(v) dx = \int_\Omega f \cdot v dx \]

Note that in this case \(A = A_{ijkl}\) is a fourth-order tensor and \(e(u)\) is the strain of \(u\), a matrix, i.e. \(e(u)_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\). We define \(Am = (A_{ijkl}m_{kl})_{ij}\).

Note that we do not impose any boundary conditions by default. They have to be set by the user using set_boundary_conditions.

We do this by approximating the homogenized coefficient on every cell and using the adapted bilinear form

\[ a_H(v_H, w_H) = \sum_{T\in \mathcal T_H} \frac{|T|}{|Y_\varepsilon(c_T)|} \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) (e(v_H) + e(\tilde{v_h})) : (e(w_H) + e(\tilde{w_h})) dx, \]

where \(\tilde{v_h}\) (and analogously \(\tilde{w_h}\)) is the solution to

\[ \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) e(\tilde{v_h}) : e(z_h) dx = - \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) e(v_H) : e(z_h) dx \]

Note that the gradient of the macro-scale function \(v_H\) appears on the RHS.

\(Y_\varepsilon(c_T) = c_T + [-\varepsilon/2, \varepsilon/2]^d\) is the micro mesh cell.

Parallelization: The code can in theory run in parallel. If you run the code in serial, the setup of the meshes can be arbitrary. If you want to run the code in parallel, it is currently only supported that the micro mesh lives on MPI.COMM_SELF. Parallelization is done by each process solving the cell problems for its local part. Passing a mesh that lives on anything but MPI.COMM_SELF as the msh_micro parameter can lead to unexpected behaviour.

Notes
  • It is the user's responsibility to ensure that the micro meshes fit into the macro mesh cells, i.e. the shifted and scaled versions \(Y_\varepsilon(c_T)\) of \(Y\) need to fit within the element \(T\). Otherwise the interpolation of the macro-scale basis functions to the micro scale may lead to unexpected behaviour.

function_space property

function_space: fem.FunctionSpace

Function space of the macro mesh.

__init__

__init__(msh: mesh.Mesh, A: Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form], f: Callable, msh_micro: mesh.Mesh, eps: float, petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_LinearElasticityHMM')

Initializes the solver, meshes and boundary conditions.

Parameters:

msh : mesh.Mesh (required)
    The macro mesh, on which we want to solve the oscillatory PDE.

A : Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form] (required)
    The coefficient. It must be callable as A(x)(y), where x is a spatial coordinate on the macro mesh (the cell center c_T) and y is a ufl.SpatialCoordinate on the microscopic mesh; A must be 1-periodic in y (see the sketch after this parameter list).

f : Callable (required)
    The right-hand side of the problem.

msh_micro : mesh.Mesh (required)
    The microscopic mesh; it must be the unit cell. It should live on MPI.COMM_SELF, since each process owns a whole copy.

eps : float (required)
    The microscopic scaling parameter.

petsc_options_global_solve : dict | None (default: None)
    PETSc solver options for the global solver.

petsc_options_cell_problem : dict | None (default: None)
    PETSc solver options for the cell problem solver.

petsc_options_prefix : str (default: 'hommx_LinearElasticityHMM')
    Options prefix used for PETSc options.
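
Since A is a fourth-order tensor here, one possible shape of the coefficient callable is sketched below. It assumes A(x)(y) should return a rank-4 UFL tensor; the isotropic material law, the Lamé parameters and the exact return type expected by the solver are assumptions, not taken from this page.

    import ufl

    def A(x):
        # Isotropic elasticity tensor A_{ijkl} with 1-periodic Lamé parameters in y.
        def A_y(y):
            mu = 1.0 + 0.5 * ufl.sin(2 * ufl.pi * y[0]) ** 2
            lmbda = 1.25 + 0.5 * ufl.cos(2 * ufl.pi * y[1]) ** 2
            I = ufl.Identity(2)
            i, j, k, l = ufl.indices(4)
            return ufl.as_tensor(
                lmbda * I[i, j] * I[k, l] + mu * (I[i, k] * I[j, l] + I[i, l] * I[j, k]),
                (i, j, k, l),
            )
        return A_y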

set_boundary_conditions

set_boundary_conditions(bcs: list[fem.DirichletBC] | fem.DirichletBC)

Set boundary conditions.

Parameters:

bcs : list[fem.DirichletBC] | fem.DirichletBC (required)
    A single BC or a list of BCs.

set_right_hand_side

set_right_hand_side(f: Callable[[ufl.SpatialCoordinate], ufl.Form])

Sets the right hand side

Parameters:

f : Callable[[ufl.SpatialCoordinate], ufl.Form] (required)
    The right-hand side of the problem.

solve

solve() -> fem.Function

Assemble the LHS and RHS, and solve the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.

plot_solution

plot_solution(u: fem.Function | None = None)

Simple plot of the solution using pyvista.

solve() needs to be run before calling this. In parallel runs, each process only plots its local part.

hommx.hmm.LinearElasticityStratifiedHMM

Bases: LinearElasticityHMM

Solver for the Multi-Scale Linear Elasticity problem on stratified media using the HMM.

This class implements the Heterogeneous Multi-Scale Method for a linear elasticity problem. We want to solve the weak formulation of the linear elasticity problem:

\[ \int_\Omega (A e(u)) : e(v) dx = \int_\Omega f \cdot v dx \]

Note that in this case \(A = A_{ijkl}\) is a fourth-order tensor and \(e(u)\) is the strain of \(u\), a matrix, i.e. \(e(u)_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\). We define \(Am = (A_{ijkl}m_{kl})_{ij}\).

Note that we do not impose any boundary conditions by default. They have to be set by the user using set_boundary_conditions.

We do this by approximating the homogenized coefficient on every cell and using the adapted bilinear form

\[ a_H(v_H, w_H) = \sum_{T\in \mathcal T_H} \frac{|T|}{|Y_\varepsilon(c_T)|} \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) (e(v_H) + e(\tilde{v_h})) : (e(w_H) + e(\tilde{w_h})) dx, \]

where \(\tilde{v_h}\) (and analogously \(\tilde{w_h}\)) is the solution to

\[ \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) e(\tilde{v_h}) : e(z_h) dx = - \int_{Y_\varepsilon(c_T)} A(c_T, \frac{x}{\varepsilon}) e(v_H) : e(z_h) dx \]

Note that the gradient of the macro-scale function \(v_H\) appears on the RHS.

\(Y_\varepsilon(c_T) = c_T + [-\varepsilon/2, \varepsilon/2]^d\) is the micro mesh cell.

Parallelization: The code can in theory run in parallel. If you run the code in serial, the setup of the meshes can be arbitrary. If you want to run the code in parallel, it is currently only supported that the micro mesh lives on MPI.COMM_SELF. Parallelization is done by each process solving the cell problems for its local part. Passing a mesh that lives on anything but MPI.COMM_SELF as the msh_micro parameter can lead to unexpected behaviour.

Notes
  • It is the user's responsibility to ensure that the micro meshes fit into the macro mesh cells, i.e. the shifted and scaled versions \(Y_\varepsilon(c_T)\) of \(Y\) need to fit within the element \(T\). Otherwise the interpolation of the macro-scale basis functions to the micro scale may lead to unexpected behaviour.

function_space property

function_space: fem.FunctionSpace

Function space of the macro mesh.

__init__

__init__(msh: mesh.Mesh, A: Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form], f: Callable, msh_micro: mesh.Mesh, eps: float, Dtheta_transpose: Callable[[fem.Constant], ufl.Form], petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_LinearElasticityHMM')

Initializes the solver, meshes and boundary conditions.

Parameters:

msh : mesh.Mesh (required)
    The macro mesh, on which we want to solve the oscillatory PDE.

A : Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form] (required)
    The coefficient. It must be callable as A(x)(y), where x is a spatial coordinate on the macro mesh (the cell center c_T) and y is a ufl.SpatialCoordinate on the microscopic mesh; A must be 1-periodic in y.

f : Callable (required)
    The right-hand side of the problem.

msh_micro : mesh.Mesh (required)
    The microscopic mesh; it must be the unit cell. It should live on MPI.COMM_SELF, since each process owns a whole copy.

eps : float (required)
    The microscopic scaling parameter.

Dtheta_transpose : Callable[[fem.Constant], ufl.Form] (required)
    Transpose of the Jacobian of \(\theta\), i.e. \((\frac{\partial \theta_j}{\partial x_i})_{i, j=1, ..., d}\).

petsc_options_global_solve : dict | None (default: None)
    PETSc solver options for the global solver.

petsc_options_cell_problem : dict | None (default: None)
    PETSc solver options for the cell problem solver.

petsc_options_prefix : str (default: 'hommx_LinearElasticityHMM')
    Options prefix used for PETSc options.

set_boundary_conditions

set_boundary_conditions(bcs: list[fem.DirichletBC] | fem.DirichletBC)

Set boundary conditions.

Parameters:

bcs : list[fem.DirichletBC] | fem.DirichletBC (required)
    A single BC or a list of BCs.

set_right_hand_side

set_right_hand_side(f: Callable[[ufl.SpatialCoordinate], ufl.Form])

Sets the right hand side

Parameters:

f : Callable[[ufl.SpatialCoordinate], ufl.Form] (required)
    The right-hand side of the problem.

solve

solve() -> fem.Function

Assemble the LHS and RHS, and solve the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.

plot_solution

plot_solution(u: fem.Function | None = None)

Simple plot of the solution using pyvista.

solve() needs to be run before calling this. In parallel runs, each process only plots its local part.

hommx.hmm.BasePeriodicHMM

Bases: ABC

Abstract base class for periodic homogenization.

Handles MPC setup, solves one cell problem per direction, and lets subclasses provide the micro function space, bilinear form, RHS, and flux expression used to build A_hom.
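
For reference, the standard periodic-homogenization cell problems (one per coordinate direction \(e_j\)) and the resulting effective tensor read, in the usual formulation on the unit cell \(Y\) (the class may compute an equivalent variant):

\[ \int_Y A(y) (e_j + \nabla \chi_j) \cdot \nabla z \, dy = 0 \quad \text{for all } Y\text{-periodic } z, \qquad (A_{\mathrm{hom}})_{ij} = \int_Y A(y) (e_j + \nabla \chi_j) \cdot e_i \, dy. \]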

function_space property

function_space: fem.FunctionSpace

Function space of the macro mesh.

set_boundary_conditions

set_boundary_conditions(bcs: list[fem.DirichletBC] | fem.DirichletBC)

Set boundary conditions.

Parameters:

bcs : list[fem.DirichletBC] | fem.DirichletBC (required)
    A single BC or a list of BCs.

set_right_hand_side

set_right_hand_side(f: Callable[[ufl.SpatialCoordinate], ufl.Form])

Sets the right hand side

Parameters:

f : Callable[[ufl.SpatialCoordinate], ufl.Form] (required)
    The right-hand side of the problem.

compute_effective_tensor

compute_effective_tensor() -> np.ndarray

Solve one periodic cell problem per direction and return A_hom.

hommx.hmm.PoissonPeriodicHMM

Bases: BasePeriodicHMM

Periodic homogenization for scalar diffusion (A = A(y)).

function_space property

function_space: fem.FunctionSpace

Function space of the macro mesh.

set_boundary_conditions

set_boundary_conditions(bcs: list[fem.DirichletBC] | fem.DirichletBC)

Set boundary conditions.

Parameters:

bcs : list[fem.DirichletBC] | fem.DirichletBC (required)
    A single BC or a list of BCs.

set_right_hand_side

set_right_hand_side(f: Callable[[ufl.SpatialCoordinate], ufl.Form])

Sets the right hand side

Parameters:

f : Callable[[ufl.SpatialCoordinate], ufl.Form] (required)
    The right-hand side of the problem.

compute_effective_tensor

compute_effective_tensor() -> np.ndarray

Solve one periodic cell problem per direction and return A_hom.
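
A short hedged sketch of how the returned tensor might be used downstream, e.g. as a constant coefficient of a homogenized macro problem. Here solver is assumed to be an already-constructed PoissonPeriodicHMM instance (its constructor is not documented on this page), and msh and V are an existing macro mesh and function space.

    import ufl
    from dolfinx import fem

    A_hom = solver.compute_effective_tensor()  # (d, d) numpy array
    A_const = fem.Constant(msh, A_hom)         # wrap as a UFL-usable constant matrix

    u = ufl.TrialFunction(V)
    v = ufl.TestFunction(V)
    a = ufl.inner(ufl.dot(A_const, ufl.grad(u)), ufl.grad(v)) * ufl.dx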

hommx.hmm._triangle_area

_triangle_area(points)

hommx.hmm._tetrahedron_volume

_tetrahedron_volume(points)

hommx.hmm._unroll_dofs

_unroll_dofs(dofs: np.ndarray, bs: int, dtype=PETSc.IntType)

Unrolls blocked DOFs into array indices.
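
A hedged illustration of what "unrolling" means for a block size bs (the actual helper may differ in details): each blocked DOF d expands to the scalar indices d*bs, ..., d*bs + bs - 1.

    import numpy as np

    def unroll(dofs: np.ndarray, bs: int) -> np.ndarray:
        # blocked dof d -> scalar indices d*bs + 0, ..., d*bs + (bs - 1)
        return (dofs[:, None] * bs + np.arange(bs)[None, :]).reshape(-1)

    # e.g. unroll(np.array([0, 2]), bs=3) -> array([0, 1, 2, 6, 7, 8])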

hommx.hmm._local_to_global_unrolled

_local_to_global_unrolled(dofs: np.ndarray, V: fem.FunctionSpace)

Maps local to global DOFs for unrolled DOFs.