# hmm
Base class for HMM solvers using DOLFINx.
## hommx.hmm.BaseHMM

Bases: `ABC`

Abstract base class for Heterogeneous Multi-Scale Method solvers.

Provides common infrastructure for scalar and vector-valued HMM problems.

Subclasses must implement:

- `_setup_macro_function_space()`: set up the macro-scale function space
- `_setup_micro_function_space()`: set up the micro-scale function space
- `_setup_cell_problem_forms()`: set up the micro-scale cell problem forms (see `PoissonHMM` for a working example, and the skeleton sketched below)
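A minimal, purely illustrative skeleton of such a subclass (only the three method names come from the API; the bodies are placeholders):

```python
from hommx.hmm import BaseHMM


class MyScalarHMM(BaseHMM):
    """Sketch of a BaseHMM subclass; all bodies are placeholders."""

    def _setup_macro_function_space(self):
        # Build the macro-scale function space on the macro mesh here.
        ...

    def _setup_micro_function_space(self):
        # Build the micro-scale function space on the unit-cell mesh here.
        ...

    def _setup_cell_problem_forms(self):
        # Assemble the micro-scale cell problem forms here;
        # see PoissonHMM for a working implementation.
        ...
```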
### __init__

`__init__(msh: mesh.Mesh, A: Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form], f: Callable[[ufl.SpatialCoordinate], ufl.Form], msh_micro: mesh.Mesh, eps: float, petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_HMM')`

Initializes the solver, meshes, and boundary conditions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `msh` | `mesh.Mesh` | The macro mesh, on which we want to solve the oscillatory PDE. | required |
| `A` | `Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form]` | The coefficient; a callable taking a `fem.Constant` and a `ufl.SpatialCoordinate` (see the sketch below this table). | required |
| `f` | `Callable[[ufl.SpatialCoordinate], ufl.Form]` | The right-hand side of the problem. | required |
| `msh_micro` | `mesh.Mesh` | The microscopic mesh; needs to be the unit cell. Should live on `MPI.COMM_SELF`, since each process owns a whole copy. | required |
| `eps` | `float` | The microscopic scaling parameter. | required |
| `petsc_options_global_solve` | `dict \| None` | PETSc solver options for the global solver. | `None` |
| `petsc_options_cell_problem` | `dict \| None` | PETSc solver options for the cell problem solver. | `None` |
| `petsc_options_prefix` | `str` | Options prefix used for PETSc options. | `'hommx_HMM'` |
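A minimal sketch of the two callables (the concrete expressions are illustrative assumptions, not taken from the library):

```python
import ufl


def A(c, x):
    # Oscillatory scalar coefficient; `c` is the fem.Constant the solver
    # passes in, `x` the spatial coordinate on the micro cell.
    return 2.0 + ufl.sin(2 * ufl.pi * x[0])


def f(x):
    # Constant unit source term, as a function of the spatial coordinate.
    return 1.0
```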
### set_boundary_conditions

Set boundary conditions.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bcs` | `list[fem.DirichletBC] \| fem.DirichletBC` | A single BC or a list of BCs. | required |
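For example, a homogeneous Dirichlet BC on the whole boundary might be built like this (a sketch assuming `solver` is a constructed HMM instance and that `V` matches the solver's macro function space):

```python
import numpy as np
from mpi4py import MPI
from dolfinx import default_scalar_type, fem, mesh

msh = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8)
V = fem.functionspace(msh, ("Lagrange", 1))

# Locate all boundary facets and the DOFs attached to them.
tdim = msh.topology.dim
facets = mesh.locate_entities_boundary(msh, tdim - 1, lambda x: np.full(x.shape[1], True))
dofs = fem.locate_dofs_topological(V, tdim - 1, facets)

bc = fem.dirichletbc(default_scalar_type(0), dofs, V)
solver.set_boundary_conditions(bc)  # a list [bc, ...] also works
```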
### set_right_hand_side

Sets the right-hand side.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `f` | `Callable[[ufl.SpatialCoordinate], ufl.Form]` | The right-hand side of the problem. | required |
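A quick usage sketch (assuming `solver` is an already-constructed HMM instance; the expression is illustrative):

```python
import ufl

# Replace the right-hand side with f(x) = sin(pi * x[0]) after construction.
solver.set_right_hand_side(lambda x: ufl.sin(ufl.pi * x[0]))
```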
### solve

Assemble the LHS and RHS and solve the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.
### plot_solution

Simple plot of the solution using pyvista.

`solve` needs to be run before calling this. When running in parallel, each process plots only its local part.
## hommx.hmm.PoissonHMM

Bases: `BaseHMM`

Solver for the multi-scale Poisson problem using the HMM.

This class implements the Heterogeneous Multi-Scale Method for a Poisson problem. We want to solve the weak formulation of the Poisson problem: find \(u_\varepsilon \in H^1_0(\Omega)\) such that

\[\int_\Omega A\!\left(\tfrac{x}{\varepsilon}\right) \nabla u_\varepsilon \cdot \nabla v \, dx = \int_\Omega f v \, dx \quad \text{for all } v \in H^1_0(\Omega),\]

with Dirichlet boundary conditions \(u = 0\) on \(\partial\Omega\).

We do this by approximating the homogenized coefficient on every cell and using the adapted bilinear form

\[a_H(u_H, v_H) = \sum_{T} \frac{|T|}{|Y_\varepsilon(c_T)|} \int_{Y_\varepsilon(c_T)} A \, \nabla R_{T,h} u_H \cdot \nabla R_{T,h} v_H \, dx,\]

where \(R_{T,h} v_H = v_H|_{Y_\varepsilon(c_T)} + \tilde{v}_h\), \(\tilde{v}_h \in V_{h,\#}(Y_\varepsilon(c_T))\), is the reconstruction operator, and \(\tilde{v}_h\) is the solution to the cell problem

\[\int_{Y_\varepsilon(c_T)} A \, \nabla \tilde{v}_h \cdot \nabla w_h \, dx = -\int_{Y_\varepsilon(c_T)} A \, \nabla v_H \cdot \nabla w_h \, dx \quad \text{for all } w_h \in V_{h,\#}(Y_\varepsilon(c_T));\]

note that the gradient of the macro-scale function \(v_H\) appears on the RHS. Here \(Y_\varepsilon(c_T) = c_T + [-\varepsilon/2, \varepsilon/2]^d\) is the micro mesh cell.

Parallelization: the code can in principle run in parallel. If you run the code in serial, the setup of the meshes can be arbitrary. If you run in parallel, only a micro mesh living on `MPI.COMM_SELF` is supported for now; parallelization is done by each process solving the cell problems for its local part. Passing a mesh that lives on anything but `MPI.COMM_SELF` as the `msh_micro` parameter can lead to unexpected behaviour.

Notes

- It is the user's responsibility to ensure that the micro meshes fit into the macro mesh cells, i.e. the shifted and scaled versions \(Y_\varepsilon(c_T)\) of \(Y\) need to fit within the element \(T\). Otherwise the interpolation of the macro-scale basis functions to the micro scale may lead to unexpected behaviour.
### __init__

`__init__(msh: mesh.Mesh, A: Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form], f: Callable, msh_micro: mesh.Mesh, eps: float, petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_PoissonHMM')`

Initializes the solver, meshes, and boundary conditions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `msh` | `mesh.Mesh` | The macro mesh, on which we want to solve the oscillatory PDE. | required |
| `A` | `Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form]` | The coefficient; a callable taking a `fem.Constant` and a `ufl.SpatialCoordinate` (see the usage sketch below). | required |
| `f` | `Callable` | The right-hand side of the problem. | required |
| `msh_micro` | `mesh.Mesh` | The microscopic mesh; needs to be the unit cell. Should live on `MPI.COMM_SELF`, since each process owns a whole copy. | required |
| `eps` | `float` | The microscopic scaling parameter. | required |
| `petsc_options_global_solve` | `dict \| None` | PETSc solver options for the global solver. | `None` |
| `petsc_options_cell_problem` | `dict \| None` | PETSc solver options for the cell problem solver. | `None` |
| `petsc_options_prefix` | `str` | Options prefix used for PETSc options. | `'hommx_PoissonHMM'` |
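Putting it together, a serial usage sketch (mesh sizes, coefficient, and `eps` are illustrative; the function space is assumed to match the solver's internal macro space):

```python
import numpy as np
import ufl
from mpi4py import MPI
from dolfinx import default_scalar_type, fem, mesh
from hommx.hmm import PoissonHMM

# Macro mesh on COMM_WORLD; micro unit-cell mesh on COMM_SELF (one copy per rank).
msh = mesh.create_unit_square(MPI.COMM_WORLD, 16, 16)
msh_micro = mesh.create_unit_square(MPI.COMM_SELF, 8, 8)

def A(c, x):
    # Oscillatory scalar coefficient (illustrative).
    return 2.0 + ufl.sin(2 * ufl.pi * x[0])

def f(x):
    return 1.0  # constant source

solver = PoissonHMM(msh, A, f, msh_micro, eps=1 / 16)

# Homogeneous Dirichlet BC on the whole boundary; V is assumed to match the
# solver's macro space (prefer the solver's own space if it is exposed).
V = fem.functionspace(msh, ("Lagrange", 1))
tdim = msh.topology.dim
facets = mesh.locate_entities_boundary(msh, tdim - 1, lambda x: np.full(x.shape[1], True))
dofs = fem.locate_dofs_topological(V, tdim - 1, facets)
solver.set_boundary_conditions(fem.dirichletbc(default_scalar_type(0), dofs, V))

solver.solve()
solver.plot_solution()
```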
### set_boundary_conditions

Set boundary conditions.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bcs` | `list[fem.DirichletBC] \| fem.DirichletBC` | A single BC or a list of BCs. | required |
### set_right_hand_side

Sets the right-hand side.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `f` | `Callable[[ufl.SpatialCoordinate], ufl.Form]` | The right-hand side of the problem. | required |
### solve

Assemble the LHS and RHS and solve the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.
### plot_solution

Simple plot of the solution using pyvista.

`solve` needs to be run before calling this. When running in parallel, each process plots only its local part.
## hommx.hmm.PoissonStratifiedHMM

Bases: `PoissonHMM`

Solver for the multi-scale Poisson problem using the HMM.

This class implements the Heterogeneous Multi-Scale Method for a Poisson problem. We want to solve the weak formulation of the Poisson problem: find \(u_\varepsilon \in H^1_0(\Omega)\) such that

\[\int_\Omega A\!\left(\tfrac{x}{\varepsilon}\right) \nabla u_\varepsilon \cdot \nabla v \, dx = \int_\Omega f v \, dx \quad \text{for all } v \in H^1_0(\Omega),\]

with Dirichlet boundary conditions \(u = 0\) on \(\partial\Omega\).

We do this by approximating the homogenized coefficient on every cell and using the adapted bilinear form

\[a_H(u_H, v_H) = \sum_{T} \frac{|T|}{|Y_\varepsilon(c_T)|} \int_{Y_\varepsilon(c_T)} A \, \nabla R_{T,h} u_H \cdot \nabla R_{T,h} v_H \, dx,\]

where the reconstruction operator satisfies \(\nabla R_{T,h} v_H = \nabla v_H|_{Y_\varepsilon(c_T)} + (D\theta(x))^\top \nabla \tilde{v}_h\), \(\tilde{v}_h \in V_{h,\#}(Y_\varepsilon(c_T))\), and \(\tilde{v}_h\) is the solution to the cell problem

\[\int_{Y_\varepsilon(c_T)} A \, (D\theta)^\top \nabla \tilde{v}_h \cdot (D\theta)^\top \nabla w_h \, dx = -\int_{Y_\varepsilon(c_T)} A \, \nabla v_H \cdot (D\theta)^\top \nabla w_h \, dx \quad \text{for all } w_h \in V_{h,\#}(Y_\varepsilon(c_T));\]

note that the gradient of the macro-scale function \(v_H\) appears on the RHS. Here \(Y_\varepsilon(c_T) = c_T + [-\varepsilon/2, \varepsilon/2]^d\) is the micro mesh cell.

Parallelization: the code can in principle run in parallel. If you run the code in serial, the setup of the meshes can be arbitrary. If you run in parallel, only a micro mesh living on `MPI.COMM_SELF` is supported for now; parallelization is done by each process solving the cell problems for its local part. Passing a mesh that lives on anything but `MPI.COMM_SELF` as the `msh_micro` parameter can lead to unexpected behaviour.

Notes

- It is the user's responsibility to ensure that the micro meshes fit into the macro mesh cells, i.e. the shifted and scaled versions \(Y_\varepsilon(c_T)\) of \(Y\) need to fit within the element \(T\). Otherwise the interpolation of the macro-scale basis functions to the micro scale may lead to unexpected behaviour.
### __init__

`__init__(msh: mesh.Mesh, A: Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form], f: ufl.form, msh_micro: mesh.Mesh, eps: float, Dtheta_transpose: Callable[[fem.Constant], ufl.Form], petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_PoissonStratifiedHMM')`

Initializes the solver, meshes, and boundary conditions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `msh` | `mesh.Mesh` | The macro mesh, on which we want to solve the oscillatory PDE. | required |
| `A` | `Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form]` | The coefficient; a callable taking a `fem.Constant` and a `ufl.SpatialCoordinate`. | required |
| `f` | `ufl.form` | The right-hand side of the problem. | required |
| `msh_micro` | `mesh.Mesh` | The microscopic mesh; needs to be the unit cell. Should live on `MPI.COMM_SELF`, since each process owns a whole copy. | required |
| `eps` | `float` | The microscopic scaling parameter. | required |
| `Dtheta_transpose` | `Callable[[fem.Constant], ufl.Form]` | Transpose of the Jacobian of \(\theta\), i.e. \(\left(\frac{\partial \theta_j}{\partial x_i}\right)_{i,j=1,\dots,d}\) (see the sketch below this table). | required |
| `petsc_options_global_solve` | `dict \| None` | PETSc solver options for the global solver. | `None` |
| `petsc_options_cell_problem` | `dict \| None` | PETSc solver options for the cell problem solver. | `None` |
| `petsc_options_prefix` | `str` | Options prefix used for PETSc options. | `'hommx_PoissonStratifiedHMM'` |
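A sketch of such a callable for the illustrative 2D stratification \(\theta(x) = (x_1, x_2 + 0.1\sin(2\pi x_1))\) (the map itself is an assumption made for this example):

```python
import ufl


def Dtheta_transpose(c):
    # Transpose of the Jacobian of theta, evaluated at the fem.Constant `c`:
    # Dtheta = [[1, 0], [0.2*pi*cos(2*pi*c_1), 1]], returned transposed.
    return ufl.as_matrix([
        [1.0, 0.2 * ufl.pi * ufl.cos(2 * ufl.pi * c[0])],
        [0.0, 1.0],
    ])
```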
### set_boundary_conditions

Set boundary conditions.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bcs` | `list[fem.DirichletBC] \| fem.DirichletBC` | A single BC or a list of BCs. | required |
### set_right_hand_side

Sets the right-hand side.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `f` | `Callable[[ufl.SpatialCoordinate], ufl.Form]` | The right-hand side of the problem. | required |
### solve

Assemble the LHS and RHS and solve the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.
### plot_solution

Simple plot of the solution using pyvista.

`solve` needs to be run before calling this. When running in parallel, each process plots only its local part.
## hommx.hmm.LinearElasticityHMM

Bases: `BaseHMM`

Solver for the multi-scale linear elasticity problem using the HMM.

This class implements the Heterogeneous Multi-Scale Method for a linear elasticity problem. We want to solve the weak formulation of the linear elasticity problem: find \(u_\varepsilon\) such that

\[\int_\Omega A\!\left(\tfrac{x}{\varepsilon}\right) e(u_\varepsilon) : e(v) \, dx = \int_\Omega f \cdot v \, dx \quad \text{for all test functions } v.\]

Note that in this case \(A = A_{ijkl}\) is a fourth-order tensor and \(e(u)\) is the strain of \(u\), a matrix, i.e. \(e(u)_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\). We define \(Am = (A_{ijkl} m_{kl})_{ij}\).
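In UFL the strain above is simply the symmetric gradient:

```python
import ufl


def strain(u):
    # e(u)_ij = 1/2 (du_i/dx_j + du_j/dx_i)
    return ufl.sym(ufl.grad(u))
```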
Note that we do not impose any boundary condition by default; they have to be set by the user using `set_boundary_conditions`.

We do this by approximating the homogenized coefficient on every cell and using the adapted bilinear form

\[a_H(u_H, v_H) = \sum_{T} \frac{|T|}{|Y_\varepsilon(c_T)|} \int_{Y_\varepsilon(c_T)} A \, e(R_{T,h} u_H) : e(R_{T,h} v_H) \, dx,\]

where \(\tilde{v}_h\) is the solution to the cell problem

\[\int_{Y_\varepsilon(c_T)} A \, e(\tilde{v}_h) : e(w_h) \, dx = -\int_{Y_\varepsilon(c_T)} A \, e(v_H) : e(w_h) \, dx \quad \text{for all } w_h \in V_{h,\#}(Y_\varepsilon(c_T));\]

note that the gradient of the macro-scale function \(v_H\) appears on the RHS. Here \(Y_\varepsilon(c_T) = c_T + [-\varepsilon/2, \varepsilon/2]^d\) is the micro mesh cell.

Parallelization: the code can in principle run in parallel. If you run the code in serial, the setup of the meshes can be arbitrary. If you run in parallel, only a micro mesh living on `MPI.COMM_SELF` is supported for now; parallelization is done by each process solving the cell problems for its local part. Passing a mesh that lives on anything but `MPI.COMM_SELF` as the `msh_micro` parameter can lead to unexpected behaviour.

Notes

- It is the user's responsibility to ensure that the micro meshes fit into the macro mesh cells, i.e. the shifted and scaled versions \(Y_\varepsilon(c_T)\) of \(Y\) need to fit within the element \(T\). Otherwise the interpolation of the macro-scale basis functions to the micro scale may lead to unexpected behaviour.
### __init__

`__init__(msh: mesh.Mesh, A: Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form], f: Callable, msh_micro: mesh.Mesh, eps: float, petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_LinearElasticityHMM')`

Initializes the solver, meshes, and boundary conditions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `msh` | `mesh.Mesh` | The macro mesh, on which we want to solve the oscillatory PDE. | required |
| `A` | `Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form]` | The coefficient; a callable taking a `fem.Constant` and a `ufl.SpatialCoordinate`. | required |
| `f` | `Callable` | The right-hand side of the problem. | required |
| `msh_micro` | `mesh.Mesh` | The microscopic mesh; needs to be the unit cell. Should live on `MPI.COMM_SELF`, since each process owns a whole copy. | required |
| `eps` | `float` | The microscopic scaling parameter. | required |
| `petsc_options_global_solve` | `dict \| None` | PETSc solver options for the global solver. | `None` |
| `petsc_options_cell_problem` | `dict \| None` | PETSc solver options for the cell problem solver. | `None` |
| `petsc_options_prefix` | `str` | Options prefix used for PETSc options. | `'hommx_LinearElasticityHMM'` |
### set_boundary_conditions

Set boundary conditions.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bcs` | `list[fem.DirichletBC] \| fem.DirichletBC` | A single BC or a list of BCs. | required |
### set_right_hand_side

Sets the right-hand side.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `f` | `Callable[[ufl.SpatialCoordinate], ufl.Form]` | The right-hand side of the problem. | required |
### solve

Assemble the LHS and RHS and solve the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.
### plot_solution

Simple plot of the solution using pyvista.

`solve` needs to be run before calling this. When running in parallel, each process plots only its local part.
## hommx.hmm.LinearElasticityStratifiedHMM

Bases: `LinearElasticityHMM`

Solver for the multi-scale linear elasticity problem using the HMM.

This class implements the Heterogeneous Multi-Scale Method for a linear elasticity problem. We want to solve the weak formulation of the linear elasticity problem: find \(u_\varepsilon\) such that

\[\int_\Omega A\!\left(\tfrac{x}{\varepsilon}\right) e(u_\varepsilon) : e(v) \, dx = \int_\Omega f \cdot v \, dx \quad \text{for all test functions } v.\]

Note that in this case \(A = A_{ijkl}\) is a fourth-order tensor and \(e(u)\) is the strain of \(u\), a matrix, i.e. \(e(u)_{ij} = \frac{1}{2}\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right)\). We define \(Am = (A_{ijkl} m_{kl})_{ij}\).

Note that we do not impose any boundary condition by default; they have to be set by the user using `set_boundary_conditions`.

We do this by approximating the homogenized coefficient on every cell and using the adapted bilinear form of `LinearElasticityHMM`, with the reconstruction built from \((D\theta(x))^\top\)-transformed gradients as in `PoissonStratifiedHMM`; \(\tilde{v}_h\) is the solution to the corresponding cell problem. Note that the gradient of the macro-scale function \(v_H\) appears on the RHS.

\(Y_\varepsilon(c_T) = c_T + [-\varepsilon/2, \varepsilon/2]^d\) is the micro mesh cell.

Parallelization: the code can in principle run in parallel. If you run the code in serial, the setup of the meshes can be arbitrary. If you run in parallel, only a micro mesh living on `MPI.COMM_SELF` is supported for now; parallelization is done by each process solving the cell problems for its local part. Passing a mesh that lives on anything but `MPI.COMM_SELF` as the `msh_micro` parameter can lead to unexpected behaviour.

Notes

- It is the user's responsibility to ensure that the micro meshes fit into the macro mesh cells, i.e. the shifted and scaled versions \(Y_\varepsilon(c_T)\) of \(Y\) need to fit within the element \(T\). Otherwise the interpolation of the macro-scale basis functions to the micro scale may lead to unexpected behaviour.
### __init__

`__init__(msh: mesh.Mesh, A: Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form], f: Callable, msh_micro: mesh.Mesh, eps: float, Dtheta_transpose: Callable[[fem.Constant], ufl.Form], petsc_options_global_solve: dict | None = None, petsc_options_cell_problem: dict | None = None, petsc_options_prefix: str = 'hommx_LinearElasticityHMM')`

Initializes the solver, meshes, and boundary conditions.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `msh` | `mesh.Mesh` | The macro mesh, on which we want to solve the oscillatory PDE. | required |
| `A` | `Callable[[fem.Constant, ufl.SpatialCoordinate], ufl.Form]` | The coefficient; a callable taking a `fem.Constant` and a `ufl.SpatialCoordinate`. | required |
| `f` | `Callable` | The right-hand side of the problem. | required |
| `msh_micro` | `mesh.Mesh` | The microscopic mesh; needs to be the unit cell. Should live on `MPI.COMM_SELF`, since each process owns a whole copy. | required |
| `eps` | `float` | The microscopic scaling parameter. | required |
| `Dtheta_transpose` | `Callable[[fem.Constant], ufl.Form]` | Transpose of the Jacobian of \(\theta\), i.e. \(\left(\frac{\partial \theta_j}{\partial x_i}\right)_{i,j=1,\dots,d}\). | required |
| `petsc_options_global_solve` | `dict \| None` | PETSc solver options for the global solver. | `None` |
| `petsc_options_cell_problem` | `dict \| None` | PETSc solver options for the cell problem solver. | `None` |
| `petsc_options_prefix` | `str` | Options prefix used for PETSc options. | `'hommx_LinearElasticityHMM'` |
### set_boundary_conditions

Set boundary conditions.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bcs` | `list[fem.DirichletBC] \| fem.DirichletBC` | A single BC or a list of BCs. | required |
### set_right_hand_side

Sets the right-hand side.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `f` | `Callable[[ufl.SpatialCoordinate], ufl.Form]` | The right-hand side of the problem. | required |
### solve

Assemble the LHS and RHS and solve the problem.

This method assembles the HMM stiffness matrix, so depending on the problem it might run for some time.
### plot_solution

Simple plot of the solution using pyvista.

`solve` needs to be run before calling this. When running in parallel, each process plots only its local part.
## hommx.hmm.BasePeriodicHMM

Bases: `ABC`

Abstract base class for periodic homogenization.

Handles the MPC setup, solves one cell problem per direction, and lets subclasses provide the micro function space, bilinear form, RHS, and flux expression used to build `A_hom`.
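The MPC setup pairs periodic boundary facets with each other. A sketch using `dolfinx_mpc` (an assumption: this documentation does not state which library backs the constraint setup) for a constraint mapping the right face of a unit cell onto the left face:

```python
import numpy as np
import dolfinx_mpc
from mpi4py import MPI
from dolfinx import fem, mesh

msh = mesh.create_unit_square(MPI.COMM_SELF, 8, 8)
V = fem.functionspace(msh, ("Lagrange", 1))

def periodic_relation(x):
    # Map points on the right face (x_0 = 1) onto the left face (x_0 = 0).
    out = x.copy()
    out[0] = x[0] - 1.0
    return out

mpc = dolfinx_mpc.MultiPointConstraint(V)
mpc.create_periodic_constraint_geometrical(
    V, lambda x: np.isclose(x[0], 1.0), periodic_relation, bcs=[]
)
mpc.finalize()
```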
### set_boundary_conditions

Set boundary conditions.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bcs` | `list[fem.DirichletBC] \| fem.DirichletBC` | A single BC or a list of BCs. | required |
### set_right_hand_side

Sets the right-hand side.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `f` | `Callable[[ufl.SpatialCoordinate], ufl.Form]` | The right-hand side of the problem. | required |
### compute_effective_tensor

Solve one periodic cell problem per direction and return `A_hom`.
## hommx.hmm.PoissonPeriodicHMM

Bases: `BasePeriodicHMM`

Periodic homogenization for scalar diffusion (\(A = A(y)\)).
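For orientation, in the classical periodic setting the effective tensor returned by `compute_effective_tensor` corresponds to the standard homogenization formula (stated here as background, not quoted from the implementation):

\[A^{\mathrm{hom}}_{ij} = \int_Y A(y)\bigl(e_j + \nabla_y \chi_j(y)\bigr) \cdot e_i \, dy,\]

where \(\chi_j\) solves the periodic cell problem \(-\nabla_y \cdot \bigl(A(y)(e_j + \nabla_y \chi_j)\bigr) = 0\) in \(Y\), one problem per coordinate direction \(e_j\).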
### set_boundary_conditions

Set boundary conditions.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `bcs` | `list[fem.DirichletBC] \| fem.DirichletBC` | A single BC or a list of BCs. | required |
### set_right_hand_side

Sets the right-hand side.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `f` | `Callable[[ufl.SpatialCoordinate], ufl.Form]` | The right-hand side of the problem. | required |
### compute_effective_tensor

Solve one periodic cell problem per direction and return `A_hom`.
## hommx.hmm._unroll_dofs

Unrolls blocked DOFs into array indices.
## hommx.hmm._local_to_global_unrolled

Maps local DOFs to global DOFs, for unrolled DOFs.