AMReX: Adaptive Mesh Refinement EXascale

AMReX is an adaptive mesh refinement software framework for building massively parallel block-structured adaptive mesh refinement (AMR) applications.

It was developed at Lawrence Berkeley National Laboratory (LBNL), the National Renewable Energy Laboratory (NREL), and Argonne National Laboratory (ANL) as part of the Block-Structured Adaptive Mesh Refinement (AMR) Co-Design Center in the United States Department of Energy's Exascale Computing Project.

It was designed to be used on exascale computing systems: machines capable of one exaFLOPS, that is, a billion billion (10^18) floating point operations per second.

The main goal for AMReX is to allow computational power to be focused on the most critical parts of the problem in the most computationally efficient way. It is the latest development in AMR by Chombo and BoxLib developers.


  • 200 times faster than previous implementations of AMR
  • C++ and Fortran support
  • 1D, 2D, 3D support
  • Support for cell-centered, face-centered, edge-centered, and nodal data
  • Support for hyperbolic, parabolic, and elliptic solves on a hierarchical adaptive grid structure
  • Optional subcycling in time for time-dependent PDEs
  • Support for particles
  • Parallelisation via OpenMP or MPI
  • Parallel I/O
  • Plotfile format supported by Amrvis, VisIt, ParaView, and yt
  • Open Source and contributions are welcome
  • Regular updates

What is a grid and why is it needed?

A grid is used in computational simulations in order to turn a physical problem into a mathematical model that can be solved by a computer. In computational fluid dynamics the problem domain is usually split up into a finite number of control volumes. Then the conservation of mass (continuity), conservation of momentum (the Navier–Stokes equations), and conservation of energy are applied to each of these control volumes. These conservation laws are partial differential equations that generally cannot be solved analytically. Instead they are discretised on the grid into a set of algebraic equations that can be solved numerically.

The grid is a crucial part of any computational fluid dynamics simulation, and it is important to ensure that it is sufficiently refined around key points of interest for the output to be valid. Usually refinement takes place globally, after the simulation has run. AMReX allows the grid to be refined during the simulation, and only in the areas where higher resolution is needed. This is very useful in time-dependent simulations, where the flow is constantly changing and a fixed grid is hard to adapt.

Simulation procedure

A typical simulation procedure would start with a physical problem that needs to be modelled. The geometry of this problem is then converted into a grid where the conservation equations can be applied to it. These equations are then solved iteratively over a number of time-steps until a desired accuracy is achieved.

Why use AMReX?

Since grid refinement is an important part of the simulation process, it is useful to have tools that help with it. Typically a grid is refined between simulation runs, either globally or in a fixed local area. It can be hard to predict exactly where more or less resolution will be needed, especially for time-dependent solutions where, for example, you have moving vortices: you are left with a grid that must be fine across the whole region the vortices pass through just to capture their details. This is where AMReX comes in.

AMReX allows these points of interest such as vortices to be tracked dynamically as the simulation progresses and only adapt the mesh in the areas necessary. Since the resolution of the grid is directly proportional to the computational power this can allow for a vast reduction in computational cost whilst still focusing resolution on the important parts of the problem.



The following exercises are conducted within a virtual machine called feeg6003_amrex.ova that can be downloaded from here. The password for this virtual machine is feeg6003.

For instructions on setting up a virtual machine please see this blog post.

Alternatively the source code can be downloaded from GitHub:

The slides used for this workshop can be downloaded from here.

Example HelloWorld

To begin we will start with a simple Hello World program that will help us to understand the structure of an AMReX program.

To begin open a terminal of your choice and first change directory into the AMReX feeg6003 folder. This is where we will be working for these exercises:

cd /home/feeg6003/Documents/amrex-master/feeg6003/

Navigate to the HelloWorld folder:

cd HelloWorld

Inside this folder there will be a C++ file called main.cpp and this contains the HelloWorld code. Open this file using a text editor of your choice:

#include <AMReX.H>
#include <AMReX_Print.H>

int main (int argc, char* argv[])
{
    amrex::Initialize(argc, argv);
    amrex::Print() << "Hello world from AMReX Version " << amrex::Version() << "\n";
    amrex::Finalize();
}

Note that initialize and finalize use the US spelling.

Before we compile this code lets go through line by line and discuss what each line does.

First the header #include <AMReX.H> allows the various AMReX commands and functions to be used. The header #include <AMReX_Print.H> allows the print function to be used within AMReX.

Going into the main program we find the line amrex::Initialize(argc,argv); which initializes AMReX in a similar way to the MPI implementation. All AMReX codes will start with this function.

The next line we have our print statement:

amrex::Print() << "Hello world from AMReX Version " << amrex::Version() << "\n";

The first part of the line, amrex::Print(), is a print function defined by AMReX allowing us to print statements from within AMReX. Following this we have <<, which is called the insertion operator in C++. It inserts the data that follows it into the stream that preceded it. It allows the value of variables to be printed to the screen, similar to the way the printf() function in C works when used with data type specifiers. In addition, << can be chained as seen in the example above.

The function amrex::Version() gets the current AMReX version. Finally we end the program with amrex::Finalize(), which shuts down the AMReX routine.

To compile the HelloWorld code, simply type make in the command line.

This will generate a file called main2d.gnu.DEBUG.ex. The name tells us that the GNU compiler was used with the debug options set by AMReX, and that the executable is a 2d build. Let's look at the GNUmakefile to see what options we have:

AMREX_HOME ?= ../../

DEBUG = FALSE
DEBUG = TRUE

DIM   = 2

COMP  = gnu

USE_MPI  = FALSE
USE_OMP  = FALSE

include $(AMREX_HOME)/Tools/GNUMake/Make.defs

include ./Make.package
include $(AMREX_HOME)/Src/Base/Make.package

include $(AMREX_HOME)/Tools/GNUMake/Make.rules

Here we can see there are two DEBUG flags (in GNU make the last assignment wins). These can be used for debugging purposes, but for the purpose of this workshop they will not be covered. The DIM = 2 option allows us to specify the number of dimensions: either 1, 2 or 3. Change this to DIM = 3.

The lines USE_MPI and USE_OMP allow us to use either MPI or OpenMP for parallelisation. Change USE_MPI = FALSE to USE_MPI = TRUE. Your GNUMake file should now look like the following:

AMREX_HOME ?= ../../

DEBUG = FALSE
DEBUG = TRUE

DIM   = 3

COMP  = gnu

USE_MPI  = TRUE
USE_OMP  = FALSE

include $(AMREX_HOME)/Tools/GNUMake/Make.defs

include ./Make.package
include $(AMREX_HOME)/Src/Base/Make.package

include $(AMREX_HOME)/Tools/GNUMake/Make.rules

Go back to the command line and type make. After compilation has completed you should see a new executable file called main3d.gnu.DEBUG.MPI.ex; as you can see, the file name reflects the changes we made to the makefile. Now that we have enabled MPI we can run on the 2 processor cores assigned to the virtual machine by typing the following command:

mpirun ./main3d.gnu.DEBUG.MPI.ex

Try further experimenting with different options in the GNUmakefile and see how this affects the output executable.

Example Velocity

This tutorial contains an Adaptive Mesh Refinement (AMR) advection code. There are two examples, Velocity and Vortex; this example applies a uniform velocity to the particle cluster.

The code is structured into two directories: 'Source' which contains the general source files for the program, and 'Exec' which is where we will be working to create an executable.

First navigate into the working directory:

cd /home/feeg6003/Documents/amrex-master/feeg6003/Exec/Velocity

In here we can see a GNUmakefile like the one in the previous HelloWorld example; open it up to see what variables have been set. We can now just run make from within the /Velocity directory to create an executable, and we should see a file named main2d.gnu.MPI.ex.

In order to run the executable, we need to specify an inputs file:

./main2d.gnu.MPI.ex inputs

That should produce a number of plot files for the timesteps. We can modify a number of the running parameters by changing the inputs file: domain size, time step and AMR parameters are all controlled from here.
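As a sketch of what such a file looks like, here is a hypothetical fragment in AMReX's key = value inputs format. The parameter names below follow AMReX's usual conventions (amr.n_cell, amr.max_level, and so on), but check the inputs file in the Velocity directory for the names and values it actually uses:

```text
stop_time    = 2.0      # final simulation time
amr.n_cell   = 64 64    # cells on the coarsest level (domain resolution)
amr.max_level = 2       # adaptive refinement levels above the base grid
amr.ref_ratio = 2       # refinement ratio between levels
amr.regrid_int = 2      # how often (in steps) the grids adapt
amr.plot_int = 10       # write a plot file every 10 steps
```

Raising amr.max_level, for instance, allows extra levels of finer grids to be placed around the tagged cells.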

Next we want to view these files to see what's going on. You can use a number of visualisation tools to view these (VisIt, ParaView), but for this virtual machine we used a program called Amrvis. In order to use Amrvis, we need to call the amrvis2d.gnu.ex executable within the Amrvis directory. To make this easier to use in the future, we can use the alias command to create a shortcut command, for example:

alias amrvis2d='/home/feeg6003/Documents/Amrvis/amrvis2d.gnu.ex'  

We can now open up individual plot files by running 'amrvis2d filename'. If, however, we want to animate a sequence of plots, you can run the following command:

amrvis2d -a plt*

Try cycling through the plots, you should be able to see the adaptive mesh following the cluster. Now you've run your first simulation! Open up the inputs file and try changing some of the parameters, see how that affects your plot files. Can you increase the number of adaptive mesh levels?

Exercise Vortex Advection

Now that you've seen how to run the Velocity simulation, your task is to do the same with the Vortex example!