amrvacio/amrio.t, you should look at subroutine
&LISTNAME
 ...VARIABLE DEFINITIONS...
/

The Fortran 90 standard for the closing line is a single slash /. Any text between two namelists is usually ignored, but on some machines such text may result in a run time error; this is compiler dependent.
Variables in a namelist can be defined by any of the following statements:
varname=value
arrayname(index1,index2,..)=value
arrayname=value1,value2,value3,...
arrayname=multiple1*value1,multiple2*value2,...
where multiple is a positive integer number. If you do not define a variable, the default value is used.
The Fortran 90 standard for logical variable values is either T and F or .true. and .false., but some compilers accept only one of them.
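As an illustration, the assignment forms above can be combined in a single namelist; the list name and variable names below (&testlist, gamma, w, flags, wflags, usesource) are hypothetical placeholders:

```fortran
! Illustrative namelist using the assignment forms described above;
! all names here are hypothetical placeholders.
 &testlist
        gamma=1.6666667d0        ! scalar assignment
        w(1,2)=0.125d0           ! single array element
        flags=1,2,3              ! whole-array value list
        wflags=3*0.33333d0       ! repeat-count form: three copies of 0.33333
        usesource=.true.         ! logical; T / F may also be accepted
 /
```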
The default values for the (many) parameters are defined in the file amrvacio/amrio.t, specifically in the subroutine readparameters.
The following namelist examples contain all the possible variables to set, choices are indicated by |. In an actual file only the parameters different from default need to be set. Constant names that should be replaced by the actual values are in capital letters. After each namelist a discussion follows.
&filelist
 filenameini='FILEINIBASE'
 filenameout='data' | 'FILEOUTBASE'
 filenamelog='amrvac' | 'FILELOGBASE'
 typefilelog='default' | 'special'
 snapshotini= INTEGER
 snapshotnext= INTEGER
 slicenext= INTEGER
 collapsenext= INTEGER
 shellnext= INTEGER
 firstprocess= F | T
 changeglobals= F | T
 resetgrid= F | T
 typeparIO= -1 | 0 | 1 | -2
 addmpibarrier= F | T
 convert= F | T
 convert_type= 'vtuBCCmpi' | 'vtu' | 'vtuCC' | 'vtumpi' | 'vtuCCmpi' | 'vtuB' | 'vtuBCC' | 'vtuBmpi' | 'pvtumpi' | 'pvtuCCmpi' | 'pvtuBmpi' | 'pvtuBCCmpi' | 'vtimpi' | 'vtiCCmpi' | 'oneblock' | 'oneblockB' | 'postrad' | 'postradB' | 'postradBdp' | 'postradBsp' | 'hdf5' | 'user' | 'usermpi'
 autoconvert= F | T
 slice_type= 'vtuCC' | 'vtu' | 'csv' | 'dat'
 collapse_type= 'vti' | 'csv'
 shell_type= 'vtu' | 'csv'
 xdmf_type= 'Node' | 'Cell'
 saveprim= F | T
 primnames= ' '
 nwauxio= INTEGER
 normvar: an array of size (0:nw) of DOUBLES
 normt= DOUBLE
 level_io= INTEGER
 level_io_min= INTEGER
 level_io_max= INTEGER
 nocartesian= F | T
 uselimiter= F | T
 writew= nw logicals, all T by default
 writelevel= nlevelshi logicals, all T by default
 writespshift: an array of dimension (1:ndim,1:2), to be filled with DOUBLES (all 0 by default)
 endian_swap= F | T
 hdf5_ini= T | F
 fastIO= F | T
 write_xdmf= T | F
 save_GZ= F | T
/

The filenameout and filenamelog entries correspond to the base names of the output and log file, respectively. With the aid of the savelist, you request saving of individual snapshots in consecutively numbered data files with names FILEOUTBASE0000.dat, FILEOUTBASE0001.dat, FILEOUTBASE0002.dat, etc. The four-digit number appended to filenameout starts by default at (the four-digit equivalent of) snapshotnext (i.e. when snapshotnext=4, the first file will have extension 0004.dat). Note that this means no more than 10000 snapshots can be saved per run, since after reaching 9999 the code will start overwriting the snapshot with the 0000.dat extension.
Similarly, the slicenext, collapsenext, and shellnext options allow you to specify the index of the next slice, collapse, and shell output. These parameters thus have to be adjusted at restart. The logfile name will become FILELOGBASE.log, i.e. the code automatically adds the .log extension to the logfile name. In the logfile, again as often as specified in the savelist, one can save user-defined information by selecting typefilelog='special'. When a residual is calculated (steady-state computations), the value of the residual is stored in the logfile, just after the time step dt and before all domain-integrated values.
Normally, there will be no input file, as the code will need to automatically generate an appropriate grid hierarchy for time
t=0. However, one can restart from any previous FILEOUTBASE****.dat file, by setting filenameini='FILEOUTBASE', where you select, e.g. the number 0002 using snapshotini=2. Accordingly, to prevent
overwriting the earlier snapshots, use then snapshotnext=3 at the same time, so that there is no need to alter the filenameout.
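For instance, a restart from the third snapshot of a previous run could look as follows (the file base names are illustrative):

```fortran
 &filelist
        filenameini='data'     ! read the dat files of the previous run
        snapshotini=2          ! restart from data0002.dat
        snapshotnext=3         ! continue numbering at data0003.dat
        filenameout='data'     ! keep writing to the same base name
 /
```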
As one may want to read in a previous snapshot and then, for this snapshot only, change something locally or globally in one or more variables, the logical firstprocess=T will result in a call to the corresponding processing subroutine in your AMRVACUSR module when a restart is performed. The logical resetgrid=T will rebuild the AMR grid when a restart is performed. The logical changeglobals=T will call initglobal to specify global parameters (e.g. equation parameters, eqpar(gamma_)) again after reading in the data when restarting a run.
The code can do parallel IO in a manner where all processors write in parallel to the same single file. On some systems, this mode of parallel IO is not available (yet) or inadequately set up, and we therefore provide an alternative means of parallel IO where a master-slave process first lets all processors communicate their data piece to the master, which then sequentially writes all data to the single file. This is selected by setting typeparIO. The value 1 corresponds to fully parallel IO, while the value 0 does master-slave parallel IO (default). The value -1 also does master-slave IO, but uses Fortran IO statements like open and write instead of the MPI versions MPI_FILE_OPEN, MPI_FILE_WRITE, etc.
For typeparIO=-1, the endianness of the output can be changed using the parameter endian_swap.
Since the fastest option typeparIO=1 produces corrupted files on some systems, setting typeparIO=-2 writes files in the same way as typeparIO=0, while performing fast reading in the same way as typeparIO=1.
Throughout the code, one can enforce some additional MPI_BARRIER calls, by setting the variable addmpibarrier=T. They might slow down execution, but may help resolve unexpected issues with MPI communication on new platforms. This option is inactive by default.
The order of saving snapshots and regridding actions through the subroutine resetgridtree is fixed: advance by one timestep, then regrid, then save the data. This has consequences for the optional variables beyond
The code can also be used to postprocess the .dat files (which are the only ones usable for restarts) into convenient data files for later visualisation purposes. Such conversion of a single .dat file at a time is to be done with the same executable (or at least one with the same settings, possibly on a different machine), on a single processor (i.e. using mpirun -np 1 ./bhac). Only selected output types can be converted in parallel, namely those whose convert_type string contains mpi. More details on the conversion procedure and options can be found here.
Different output formats can be chosen as well for the slice, collapse, and shell output, using the parameters slice_type, collapse_type, and shell_type, respectively.
In this conversion mode,
the idea is to set the filenameini and the snapshotini entries together with convert=T.
You can ask the code during conversion to change from conservative to primitive variable output by setting saveprim=T, and then you should give the corresponding names for the primitive variables in primnames. Just like wnames (the latter is defined in the methodlist), this is a single string with space-separated labels for the variables. primnames should just list the names of the nw primitive variables, like wnames does for the conservative ones.
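A typical conversion-mode parameter file section might then look like this (the base name, snapshot number, and variable labels are illustrative):

```fortran
 &filelist
        filenameini='data'        ! base name of the snapshots to convert
        snapshotini=10            ! convert data0010.dat
        convert=T
        convert_type='vtuBCCmpi'  ! cell-centered VTK output
        saveprim=T                ! write primitive instead of conserved variables
        primnames='rho v1 v2 v3 p b1 b2 b3'   ! illustrative labels
 /
```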
It is also possible to perform the conversion step during the run of the simulation with the switch autoconvert=T. Naturally, this leads to more computational overhead and IO, but using the pvtu(CC)mpi filetypes, this can be reduced to a minimum.
For simulations on non-cartesian grids (cylindrical or spherical), there is the option to output the non-cartesian cell coordinates and vector components, which then typically forces you to do the conversion to cell-center cartesian grid coordinates and vector variables in your visualization session. By default (i.e. nocartesian=F), the convert module does the conversion from the orthogonal to locally cartesian coordinates and vector components for you. You can overrule this default behavior by setting nocartesian=T. (Note: for the tecplot format, the coordinate labels are then corrected in the converted file as well.)
The only variable that then further matters is convert_type. The supported options are described in Conversion options.
It is even possible to temporarily add additionally computed auxiliary variables that are instantaneously computable from the data in the FILEOUTBASE****.dat file to the converted snapshot. You should then provide the number of such extra variables in nwauxio
(see also 'Auxiliary variables in BHAC'),
and a corresponding definition for how to compute them from the available
nw variables in the subroutine
specialvar_output, whose default interface is provided in the
amrvacnul.speciallog.t module. You can there compute variables that are not in your simulation or data file, and store them in the extra slots
nw+1:nw+nwauxio of the
w variable. For consistency, you should then also add meaningful names to the strings that we use for identifying variables, namely primnames and wnames. This has to be done in the subroutine
The output values are normally given in code units, i.e. in the dimensionless values used throughout the computation (in the initial condition, we always
adhere to the good practice of choosing an appropriate unit of length, time, mass and expressing everything in dimensionless fashion). One can, in the convert
stage only, ask to multiply the values by their respective dimensional unit value. normt should then be the unit for time, while the array
normvar combines the unit of length (to be stored in
normvar(0)) with corresponding units for all primitive variables in
normvar(1:nw). The corresponding values for the conservative entries are computed when saveprim=F. Note that these latter are not all independent and must be set correctly by the user. In any case, they are not in the
restart files (the ones with
.dat), and are just used at the conversion stage. See for details of their use the
There is a uselimiter logical variable, which is false by default, but can be used for a 1D Cartesian grid computation, where it then influences the way in which the convert step computes corner values from cell center values. Normally, it would just take the arithmetic average 0.5(w_L+w_R), where w_L is the cell center value at left, and w_R is at right. Activating uselimiter will compute 0.5(w_LC+w_RC) instead, where the left and right edge values w_LC, w_RC are computed by limited reconstruction.
Note that different formats for postprocess data conversion can be added in the convert.t file (see the convert entry of this manual for details).
VTK-based formats allow for saving only a part of the
nw variables, by setting the
logical array writew. The same is true for selecting levels by using the array
writelevel. Finally, you can clip away part of the domain, for output in a selected
region. This is done by filling the array writespshift. That array should use
(consistent) double precision values (between 0 and 1) that specify the fraction of the total domain to be clipped away from the domain boundary at the minimum side, and from the
maximum side, respectively, and this for all dimensions. The array thus has shape (1:ndim,1:2), with the first entry specifying the dimension, and the second whether you clip at the minimum (1) or maximum (2) side, respectively.
Note that in the end, only the grids that are fully contained within the clipped region are
then converted and stored in the output.
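For example, to clip away 10% of the domain at each boundary of a 2D run while limiting the output levels (the values are illustrative):

```fortran
 &filelist
        convert=T
        writespshift(1,1)=0.1d0   ! clip 10% at the x-minimum side
        writespshift(1,2)=0.1d0   ! clip 10% at the x-maximum side
        writespshift(2,1)=0.1d0   ! clip 10% at the y-minimum side
        writespshift(2,2)=0.1d0   ! clip 10% at the y-maximum side
        level_io_max=2            ! do not output AMR levels above 2
 /
```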
The switches level_io, level_io_min and level_io_max
are there to restrict the AMR levels of the output file at the convert stage. These switches do not work with autoconvert.
level_io=3 will coarsen/refine the data to level 3
everywhere, resulting in a uniform grid.
level_io_min limits the minimum level for output by refining
levels below level_io_min until level_io_min is reached.
level_io_max limits the maximum level of the
output file. This can be useful to visualize large datasets.
In addition to the .dat output, data can be saved also in HDF5 format. In this case, the code needs to be compiled with
#define HDF5 in
definitions.h (see the HDF5 entry of this manual for more details). These files can be used both for visualization and for restart. The HDF5-IO comes with some new parameters that are listed next. With hdf5_ini=T you will restart your simulations from an
*.hdf5 file. In some cases (mostly for data conversion) it might be useful to restart from an old
*.dat snapshot and then dump
*.hdf5. In this case, set hdf5_ini=F. Along with each *.hdf5 file, an *.xdmf file is also created, which is necessary for data visualisation with Visit or Paraview. This does not happen in parallel and might take a considerable fraction of the IO-time. If this is not required (e.g. you only want to checkpoint), you can set write_xdmf=F. Otherwise, the default is write_xdmf=T.
The data can be saved either vertex (corner) centered or cell centered by choosing xdmf_type='Node' or 'Cell', respectively. When visualized, 'Node' data will be displayed smoothly, but shifted by half a grid-cell.
The parameter save_GZ controls whether ghost zones are saved. If setting fastIO=T, you will see a considerable speed-up in writing the output, but also double your memory footprint (data will be aggregated into buffers before dumping). If memory is no problem for your simulation, it is highly recommended to use this switch. If set to false (default), each process dumps its data block by block (but still in parallel over all the blocks). In case of simulations where not all processes control exactly the same number of blocks, this might lead to some independent data-dumps and thus increase the IO-time.
The HDF5 output can also be used for conversion, by setting convert_type='hdf5' in order to convert old
*.dat data. This mostly makes sense in combination with hdf5_ini=.false..
Finally, the parameters saveprim and typeparIO are ignored by the HDF5-IO, and data is always saved as primitives and using the parallel HDF5 writing.
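An illustrative &filelist fragment for converting an old *.dat snapshot to HDF5 would be:

```fortran
 &filelist
        filenameini='data'    ! illustrative base name of the old *.dat run
        snapshotini=5         ! read data0005.dat
        hdf5_ini=F            ! restart source is *.dat, not *.hdf5
        convert=T
        convert_type='hdf5'
        write_xdmf=T          ! also produce the *.xdmf for Visit/Paraview
 /
```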
&savelist
 ditsave(FILEINDEX)=INTEGER
 dtsave(FILEINDEX)=DOUBLE
 itsave(SAVEINDEX,FILEINDEX)=INTEGER
 tsave(SAVEINDEX,FILEINDEX)=DOUBLE
 nslices=INTEGER
 slicedir(INTEGER)=INTEGER
 slicecoord(INTEGER)=DOUBLE
 collapse(INTEGER)= F | T
 collapseLevel= INTEGER
 nshells= INTEGER
 shellcoord(INTEGER)= INTEGER
 nxShell1= INTEGER
 nxShell2= INTEGER
/

You can specify the frequency or the actual times of saving results into the log, (consecutive) output, slice files, collapsed views, calls to the analysis subroutine, and shell files, which are identified by their FILEINDEX 1, 2, 3, 4, 5, and 6, respectively. The slices are parametrized by the values of nslices, slicedir and slicecoord; more information on slice output can be found in slice output. The parameters collapse and collapseLevel belong to the collapse feature, which is described in detail in collapsed output. Some info on the analysis feature is given in analysis. The shell output is described in shells, and is controlled by the parameters nshells, shellcoord, and nxShell^D.
The times can be given in timesteps or physical time. Typical examples: ditsave=1,10 saves results into the log file at every timestep, and will generate a .dat output file every 10th step (however, the number at the end of the 'datamr/FILEOUTBASE****.dat' file will increase by one from snapshotnext, as explained above). dtsave=0.1,12.5 saves into the log file at times 0.1,0.2,... and will generate a .dat output file at times 12.5,25,37.5,..., assuming that we start at t=0. ditsave(1)=10 tsave(1,2)=5.2 tsave(2,2)=7. will save info into the log file every 10th timestep and snapshots at t=5.2 and 7. Actually, the first timestep when the physical time is greater than 5.2 (or 7.0) is saved. Mixing itsave and tsave is possible; mixing dtsave (or ditsave) with tsave (or itsave) for the same file should be done with care, since dtsave (and ditsave) will be offset by any intervening tsave (and itsave). However, one may want to save snapshots more frequently at the beginning of the simulation. E.g. tsave(1,2)=0.1 tsave(2,2)=0.25 tsave(3,2)=0.5 dtsave(2)=0.5 could be used to save snapshots at times 0.1, 0.25, 0.5, 1, 1.5, ... etc.
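Put together as a namelist, the last example reads:

```fortran
 &savelist
        ditsave(1)=10        ! log entry every 10th timestep
        tsave(1,2)=0.1d0     ! extra early snapshots ...
        tsave(2,2)=0.25d0
        tsave(3,2)=0.5d0
        dtsave(2)=0.5d0      ! ... then a snapshot every 0.5 time units
 /
```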
If no save condition is given for a file, you get a warning; the final output is always saved after the stop condition has been fulfilled. If itsave(1,2)=0 is set, the initial state is saved before
&stoplist
 itmax= INTEGER
 tmax= DOUBLE
 tmaxexact= F | T
 dtmin= DOUBLE
 it= INTEGER
 t= DOUBLE
 treset= F | T
 itreset= F | T
 residmin= DOUBLE
 residmax= DOUBLE
 typeresid= 'relative' | 'absolute'
/

You may use an upper limit itmax for the number of timesteps and/or the physical time, tmax. If tmaxexact=T is set, the last time step will be reduced so that the final time 't' is exactly 'tmax'.
Numerical or physical instabilities may produce huge changes or very small time steps, depending on the way dt is determined. These breakdowns can be controlled by setting a lower limit dtmin for the physical time step, which is useful when dt is determined from the courantpar parameter. If AMRVAC stops due to dt < dtmin, a warning message is printed.
You have to specify at least one of tmax, itmax. AMRVAC stops execution when any of the limits are exceeded. The initial time value t and integer time step counter it values can be specified here. However, when a restart is performed from a previous .dat file, the values in that file will be used unless you enforce their reset to the values specified here by activating the logicals treset=T, or itreset=T.
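For instance, a run until t=100 that also resets the time counters after a restart could be requested as follows (the values are illustrative):

```fortran
 &stoplist
        tmax=100.0d0      ! stop at physical time 100
        itmax=1000000     ! safety cap on the number of timesteps
        dtmin=1.0d-9      ! abort if the timestep collapses below this
        treset=T          ! on restart, reset t to the value given here
        t=0.0d0
 /
```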
In case the code is used for computing towards a steady-state, it is useful to quantify the evolution of the residual as a function of time.
The residual will be computed taking account of all nw variables (see also AMRVAC_Man/mpiamrvac_nw.html), over all grids. You can tell the code to stop computing
when a preset minimal value for this residual is reached, by specifying residmin=1.0d-6 (for example, a rather stringent value when
all variables are taken into account). Similarly, setting residmax=1.0d8 will force the code to stop when the residual change becomes excessively large (indicating that you will not reach a steady state for the problem at hand, without telling you why).
The residual is computed as specified in the subroutine
getresidual in the
setdt.t module, and you may want to
inspect what it actually computes there (and perhaps modify it for your purposes), and see the distinction between typeresid='relative' and typeresid='absolute'. When either residmin or residmax is specified here, the value of the
residual will be added to the logfile.
&methodlist
 wnames= 'STRING'
 fileheadout= 'STRING'
 typeadvance= 'twostep' | 'onestep' | 'threestep' | 'rk4' | 'jameson' | 'fourstep' | 'ssprk43' | 'ssprk54'
 typefull1= nlevelshi strings from: 'tvdlf' | 'tvdlfpos' | 'hll' | 'source' | 'nul'
 typepred1= nlevelshi strings from: 'tvdlf' | 'tvdlfpos' | 'hll' | 'source' | 'nul'
 typelimiter1= nlevelshi strings from: 'minmod' | 'woodward' | 'superbee' | 'vanleer' | 'albada' | 'ppm' | 'mcbeta' | 'koren' | 'cada' | 'cada3' | 'mp5' | 'weno5' | 'wenoZP'
 typegradlimiter1= nlevelshi strings from: 'minmod' | 'woodward' | 'superbee' | 'vanleer' | 'albada' | 'ppm' | 'mcbeta' | 'koren' | 'cada' | 'cada3'
 mcbeta= 1.4D0 | DOUBLE
 typegrad= 'central' | 'limited'
 typediv= 'central' | 'limited'
 tvdlfeps= DOUBLE
 BnormLF= T | F
 typeinversion= '1DW' | '2DW' | '2D1D' | '2D1DEntropy' | '1D2DEntropy' | '1DEntropy' | '2DEntropy' | 'Entropy'
 typeemf= 'average' | 'uct1' | 'uct2' | 'uct2+av' | 'none'
 clean_init_divB= 'vecpot' | 'mg' | 'none'
 typeaxial= 'slab' | 'cylindrical' | 'spherical'
 strictgetaux= F | T
 nflatgetaux= INTEGER
 tlow= DOUBLE
 maxitnr= INTEGER
 tolernr= DOUBLE
 absaccnr= DOUBLE
 dmaxvel= DOUBLE
 smallrho= DOUBLE
 smallp= DOUBLE
/
fileheadout is the header line in the output file, and is an identifier used in the (converted) data files. Both wnames and fileheadout are thus only relevant within amrio.t and convert.t, and only have effect for the later data visualization (e.g. when fileheadout='test_mhd22' is specified, the Idl macros will deduce from the name that it is for a 2D MHD run).
There are also higher order Runge-Kutta type methods, like the 'fourstep' (Hans De Sterck variant; 'jameson' for the Jameson variant). These higher order time integration methods are most useful in conjunction with higher order spatial discretizations like a fourth order central difference scheme (currently not implemented). Other available options are a third order Runge-Kutta (RK) integrator ('threestep', Gottlieb and Shu 1998), the classical fourth order RK integrator ('rk4') and the strong-stability preserving s-step, p-order RK methods SSPRK(4,3) and SSPRK(5,4) by Spiteri and Ruuth (2002).
The array typefull1 defines a spatial discretization for the time integration per activated grid level (and on each level, all variables use the same
discretization). In total,
nlevelshi (i.e. the highest number of levels allowed) methods must be specified. By default
nlevelshi=13 and these are then all set by
typefull1=13*'tvdlf'. Different discretizations
can be mixed across the
mxnest activated grid levels (but the same time-stepping scheme must apply on all levels).
Setting for a certain level the typefull1 to 'nul' implies doing no advance at all, and 'source' merely adds sources. These latter two values must be used with care, obviously, and are only useful for testing source terms or to save computations when fluxes are known to be zero.
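For instance, to use 'hll' on the first two levels and the more diffusive 'tvdlf' on all higher ones (assuming the default nlevelshi=13), one could set:

```fortran
 &methodlist
        typefull1= 2*'hll',11*'tvdlf'   ! 13 entries in total, one per level
        typelimiter1= 13*'ppm'          ! same limiter on all levels
 /
```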
The typepred1 array is only used when typeadvance='twostep' and specifies
the predictor step discretization, again per level (so
nlevelshi strings must be set).
By default, it contains
typepred1=13*'tvdlf' (default value
nlevelshi=13), and it then deduces e.g. that 'cd' is predictor for 'cd', 'hancock' is predictor for both 'tvdlf' and 'tvdmu'. Check its default behavior in the
Thus typepred1 need not be defined in most cases,
however typefull1 should always be defined if methods other than 'tvdlf' are to be used.
The typegradlimiter1 parameter selects the limiter to be used in computing gradients (or the divergence of a vector) when typegrad='limited' (or typediv='limited') is selected. It is thus only used in the gradientS (divvectorS) subroutines in geometry.t. The parameter of the mcbeta limiter is set by means of mcbeta; its default value is 1.4.
For calculations involving constraint dampening on the magnetic field, BnormLF=T actually uses the Lax-Friedrichs flux expression for the normal magnetic field component in the HLL method. This improves robustness.
The typeinversion parameter selects the con2prim strategy: '2DW' performs a 2D inversion and '1DW' performs a 1D con2prim. See Porth et al. (2017) for a description of the methods. For the combinations, for example '2D1D' will first try a 2D inversion and then attempt a 1D inversion as backup. The most robust combination is obtained by entropy backups (discarding the energy equation in favor of an advected entropy tracer), which is activated e.g. with '2D1DEntropy'. For entropy based inversions to work, you need to switch on entropy advection via #define ENTROPY and allow for one more variable, which expects boundary and initial conditions.
When a staggered mesh is used (i.e. #define STAGGERED is set in definitions.h), you can control the method of EMF recovery with typeemf. 'average' corresponds to Balsara-Spicer reconstruction and, when not using a staggered mesh (no #define STAGGERED in definitions.h), it leads to Toth Flux-CT.
'uct1' is Londrillo & Del Zanna (2004) and 'uct2' corresponds to Del Zanna et al. (2007). See Olivares et al. (2019) for details. When typeemf='none', no constrained transport is performed. Set this when using Dedner cleaning (activated via
clean_init_divB controls the method by which the initial magnetic field is handled. It is highly recommended to initialize the magnetic field from a vector potential (by filling the corresponding subroutine in amrvacusr.t). Re-calculation of the initialized magnetic field is required at coarse-fine interfaces and, when clean_init_divB='vecpot' (default), this is consistently done by BHAC. When no vector potential is available, one can also use a multi-grid cleaning strategy, which is activated by clean_init_divB='mg'. Finally, no cleaning of the initial magnetic field is done when clean_init_divB='none'.
The -phi= and -z= flags have a meaning and should be used at $BHAC_DIR/setup.pl, to denote the order of the second and third coordinate. By default, typeaxial='slab' and Cartesian coordinates are used (with translational symmetry for ignorable directions).
For 2.5D and 'cylindrical' the grid and the symmetry depend on the settings for the -phi and -z flags. For example, when setting -d=23 -z=2 -phi=3, a cartesian grid is used in a poloidal plane, with axial symmetry. The vector components then appear in the r,z,phi order.
One can use 2.5D on a circular grid also with translational symmetry in the third, i.e. axial, direction by the use of
$BHAC_DIR/setup.pl -d=23 -phi=2 -z=3. The vector components then appear in the r,phi,z order.
For 2.5D (-d=23 -z=2 -phi=3) and 'spherical', the coordinates denote radial and polar angle in a poloidal plane and the out-of-plane phi- component as third direction all depending on (r,theta) alone. This means that 2.5D and 'spherical' implies the use of a polar grid in a poloidal cross-section (i.e. one containing the symmetry axis) and axial symmetry. Thus in spherical coordinates, the "z"- direction is synonymous with the "theta"-direction.
You can also run "spherical" simulations in the equatorial plane by setting
$BHAC_DIR/setup.pl -d=23 -phi=2 -z=3.
In 3D, the choice of curvilinear grid is cartesian for 'slab', and the usual cylindrical and spherical coordinate systems when setting one of those latter values. For full 3D simulations, one should set the order of the vectors as follows:
$BHAC_DIR/setup.pl -d=33 -z=2 -phi=3.
With strictgetaux=T, the code will stop after the inversions and their specified backups (e.g. 1D, 2D, entropy-based inversion) have failed. With strictgetaux=F, the default measure is to replace the primitive values of the faulty cell with an average over the neighbor cells. The width of this environment is set by the integer nflatgetaux.
Alternatively, the user can also implement a subroutine correctaux_usr which allows one to set e.g. all values to floors (better not to touch the magnetic field though). This subroutine should be part of the amrvacusr.t file. If it is implemented, the code will first try to fix the cell with correctaux_usr, and call correctaux (neighbor average) for cells not fixed by the user.
Another measure to ensure robustness is controlled by tlow. When tlow is not set to zero (the default), after each con2prim the subroutine fixp_usr is called (part of amrvacusr.t). It allows one to implement floors and thus guarantee positivity of density and pressure. This is commonly used in black-hole torus simulations, see e.g. the setup FMtorus provided with this release. The value of tlow can also be used to control thresholds of the floors (implemented yourself in fixp_usr) in the parameter file.
The maximum number of Newton-Raphson iterations in con2prim is set by maxitnr=100. This Newton-Raphson has a tolerance parameter tolernr and an absolute accuracy parameter, by default set to absaccnr=1.0d-13. These can be changed if needed. The default value dmaxvel=1.0d-8 is used in the process to define a maximal velocity allowed, namely 1-dmaxvel (where velocities are always below 1, which is the speed of light value). The values of smallrho and smallp are used to set bounds for consistency checking in con2prim.
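An illustrative &methodlist fragment combining these robustness options could read (the numerical values are illustrative, not recommendations):

```fortran
 &methodlist
        typeinversion='2D1DEntropy'  ! 2D inversion, 1D backup, entropy fallback
        strictgetaux=F               ! average over neighbors on failure
        nflatgetaux=1                ! width of the averaging environment
        tlow=1.0d-10                 ! illustrative floor control for fixp_usr
        smallrho=1.0d-14             ! illustrative consistency bounds
        smallp=1.0d-16
 /
```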
&boundlist
 dixB= INTEGER
 typeB= 'cont','symm','asymm','periodic','special','specialT','noinflow','limitinflow'
 ratebdflux= DOUBLE
 internalboundary= F | T
 typeghostfill= 'linear' | 'copy' | 'unlimit'
 typegridfill= 'linear' | 'other'
 primitiveB(iside,idim)= F | T
/

The boundary types have to be defined for each variable at each physical edge of the grid, i.e. in 2.5D (no tracer-fluid, no entropy):
rho,s1,s2,s3,tau,b1,b2,b3,lfac,xi at the left boundary; rho,s1,s2,s3,tau,b1,b2,b3,lfac,xi at the right; rho,s1,s2,s3,tau,b1,b2,b3,lfac,xi at the bottom; and finally rho,s1,s2,s3,tau,b1,b2,b3,lfac,xi at the top boundary. In general, the order is xmin, xmax, ymin, ymax, zmin, and zmax.
By default, ghost cells will be filled with information from the conserved domain variables. For added robustness, you can choose to employ boundary conditions on primitive variables instead by setting
primitiveB(iside,idim)=T, where iside is either 1 or 2 (for the min and max boundary) and idim corresponds to the dimension-index of the boundary, i.e. 1, 2 or 3 for the first, second and third dimension, respectively.
The number of ghost cell layers used to surround the grid (and in fact each grid at each level and location) is set by default to dixB=2.
It is important to note that staggered mesh currently requires this to be an even number. Hence if needed, this value can be increased, e.g. for the higher order limiters like
ppm, weno, mp5, ... which then require four ghost cells.
The default boundary type is cont for all variables and edges; it means that the gradient (of the conservative variables) is kept zero by copying the variable values from the edge of the mesh into the ghost cells.
Other predefined types are the symm and asymm types, which are mostly used for reflective boundaries, or at symmetry axes of the domain (the polar or equatorial axis, e.g.). One then typically makes the momentum orthogonal to the given boundary antisymmetric (asymm), the rest of the variables symm. These boundary types can also be used to represent a perfectly conducting wall (the orthogonal component of the magnetic field should be antisymmetric, the transverse component symmetric) or the physical symmetry of the physical problem.
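As an illustration for the 2.5D variable order given above (rho,s1,s2,s3,tau,b1,b2,b3,lfac,xi), a perfectly conducting wall at the bottom (ymin, i.e. third) boundary could be prescribed as follows; the typeB(iw,iB) indexing convention shown here is an assumption:

```fortran
 &boundlist
        ! boundary 3 = ymin; s2 and b2 are orthogonal to this wall,
        ! so they get 'asymm', everything else 'symm'
        typeB(1:10,3)='symm','symm','asymm','symm','symm',
                      'symm','asymm','symm','symm','symm'
 /
```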
The case of periodic boundaries can be handled with setting 'periodic' for all variables at both boundaries that make up a periodic pair. Hence triple periodic in 3D MHD where 8 variables are at play means setting
For 3D cylindrical and spherical grid computations, the singular polar axis is trivially handled using a so-called
pi-periodic boundary treatment, where periodicity across the pole comes from the grid cell diagonally across
the pole, i.e. displaced over pi instead of 2 pi. These are automatically recognized from the typeaxial setting, and
the corresponding range in angle phi must span 2 pi for cylindrical, and theta must then start at zero (to include the north pole) and/or end at pi (for the south pole)
for spherical grids. The user just needs to
set the typeB as if the singular axis is a symmetry boundary (using symm and asymm combinations).
The possibility exists to put a boundary condition mimicking zero or reduced inflow across the computational boundary, by selecting
typeB='limitinflow' for the momentum vector components of your particular application. This is in principle only
relevant for the momentum component locally perpendicular to the boundary (for others a continuous extrapolation is done).
The types noinflow and limitinflow extrapolate continuously those values that are outwardly moving, while clipping all values that are inwardly advecting momentum to zero (noinflow) or to a user-controlled fraction of the inward momentum (limitinflow). The latter fraction is set by ratebdflux, which is 1 by default, and should be set to a value between zero and 1 accordingly.
The special type is to be used for setting fixed values, or any time dependent or other more complicated
boundary conditions, and results in a call to the specialbound_usr
subroutine which has to be provided by the user in the
The variables with special boundary type are updated last
within a given boundary region, thus the subroutine may use the updated
values of the other variables. The order of the variables is fixed by the equation module chosen, i.e.
rho m1 m2 m3
e b1 b2 b3 for 3D MHD, but by setting all typeB entries for a
certain boundary region to special, one is of course entirely free to
fill the boundary info in a user-defined manner.
The setting specialT allows one to freely prescribe the tangential components of a staggered variable. This is a slight hack, and hence it is signalled to specialbound_usr that staggered variables should now be dealt with by reversing the sign of the iB parameter. The iB parameter is used to communicate which boundary to apply the conditions on (1,2,3,4,... in the ordering described at the beginning of this section). Right after specialT, the normal components are filled by utilizing divB=0.
Internal boundaries can be used to overwrite the domain variables with specified values. This is activated with the switch internalboundary=T. Internally, these are assigned before the ghost cells and external boundaries are applied (in the subroutine get_bc). The user can provide conditions on the conserved variables depending on location or time in the subroutine bc_int, which is defaulted in amrvacnul/specialbound.t.
The typeghostfill='linear' implies the use of limited linear reconstructions in the filling of ghost cells for internal boundaries that exist due to the AMR hierarchy. A first order 'copy' can be used as well, or an unlimited linear reconstruction by setting it to 'unlimit'. To retain second order accuracy, at least the default 'linear' type is needed.
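A minimal sketch combining an internal boundary (handled by a user-supplied bc_int) with the default second-order ghost-cell prolongation:

```fortran
&boundlist
 internalboundary=T
 typeghostfill='linear'
/
```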
&amrlist
 mxnest= INTEGER
 nxlone1= INTEGER
 nxlone2= INTEGER
 nxlone3= INTEGER
 dxlone1= DOUBLE
 dxlone2= DOUBLE
 dxlone3= DOUBLE
 xprobmin1= DOUBLE
 xprobmax1= DOUBLE
 xprobmin2= DOUBLE
 xprobmax2= DOUBLE
 xprobmin3= DOUBLE
 xprobmax3= DOUBLE
 errorestimate= INTEGER
 nbufferx1= INTEGER
 nbufferx2= INTEGER
 nbufferx3= INTEGER
 amr_wavefilter= nlevelshi DOUBLE values
 tol= nlevelshi DOUBLE values
 tolratio= nlevelshi DOUBLE values
 flags= INTEGER array, with at most nw+1 entries
 wflags= DOUBLE array, with at most nw values that must sum up to 1.0d0
 prolongprimitive= F | T
 coarsenprimitive= F | T
 restrictprimitive= F | T
 typeprolonglimit= 'default' | 'minmod' | 'woodward' | 'mcbeta' | 'koren'
 tfixgrid= DOUBLE
 itfixgrid= INTEGER
 ditregrid= INTEGER
/
The maximum number of grid levels mxnest is bounded from above by nlevelshi and minimally equal to 1 (which is the default value, implying no refinement at all, but possibly a domain decomposition when the domain resolution is a multiple of the maximal grid resolution controlled by the -g= flag of $AMRVAC_DIR/setup.pl). The parameter
nlevelshi=8 by default, a value set in
mod_indices.t, so that if more than 8 levels are to be used, one must change this value and recompile. Note that when
mxnest>1, it is possible that during runtime the highest grid level is temporarily lower than mxnest, and/or that the coarsest grid is at a higher level than the base level.
The computational domain is set by specifying the minimal and maximal coordinate value per direction in the
xprob^L settings. When cylindrical or spherical coordinates are selected with typeaxial, the angle ranges (for phi in the cylindrical case, and for both theta and phi in the spherical case) are to be given in 2 pi units.
The base grid resolution (i.e. on the coarsest level 1) is best specified by
providing the number of grid cells per dimension needed to cover the full computational domain set by the
xprob^L settings. This is done by specifying these numbers in
nxlone^D, where there are as many integers to be set as the dimension of the problem.
Note that it is necessary to have a consistent combination of base grid resolution and the
-g= setting for
$AMRVAC_DIR/setup.pl: the latter specifies the maximal individual grid resolution which includes ghost cells at each side, while the
nxlone^D resolution must thus be a multiple of the individual grid resolution without the ghost cells included. An alternative way to specify the domain resolution is to give the base grid cell size directly in
dxlone^D, but then again the same restrictions apply, and one must be sure that the step size properly divides the domain size (from the
xprob^L pairs). It is advocated to always use the integer setting through
nxlone^D, from which the code automatically computes the corresponding dxlone^D.
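As an illustrative sketch (all values are placeholders), a 2D setup on a 96x96 base grid with up to 3 AMR levels, compiled with -g=16,16 (i.e. 12x12 interior cells per grid block, assuming 2 ghost cells per side, so that 96 is a multiple of 12), could be requested as:

```fortran
&amrlist
 mxnest=3
 nxlone1=96
 nxlone2=96
 xprobmin1=0.0d0
 xprobmax1=1.0d0
 xprobmin2=0.0d0
 xprobmax2=1.0d0
/
```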
When errorestimate=0, all refinement will only be based on the user-defined criteria to be coded up in the subroutine specialrefine_grid.
When errorestimate=1, we used to follow the Richardson procedure. This is deprecated now, as no one has used it in years.
When errorestimate=2, we simply compare the previous time level t_(n-1) solution with the present t_n values, and trigger refinement on relative differences.
When errorestimate=3, the default value, errors are estimated using current t_n values and their gradients following Lohner's prescription. In this scheme, the amr_wavefilter coefficient can be adjusted from its default value 0.01d0. You can set different values for the wavefilter coefficient per grid level. This error estimator is computationally efficient, and has shown similar accuracy to the Richardson approach on a variety of test problems.
In all three latter cases, a call to the user defined subroutine
specialrefine_grid follows the error estimator, making it possible to use this routine for augmented user-controlled refinement, or even derefinement.
Depending on the error estimator used, it is needed or advisable to additionally provide a buffer zone of a certain number of grid cells in width, which will
surround cells that are flagged for refinement by any other means, thus triggering more finer-level grids.
nbufferx^D=2 is usually sufficient. It can never be greater than the grid block size specified with the -g setting of
$AMRVAC_DIR/setup.pl. For Lohner's scheme, the buffer can be turned off completely by setting nbufferx^D=0, which is the default value.
Out of the nw variables (which may include auxiliary variables, such as those predefined in the special relativistic modules), the number of variables to be used in the estimator is set by specifying it in
flags(nw+1). Correspondingly, the first
flags(nw+1) entries of the flags array select their index numbers, and the corresponding weights are to be given in wflags. E.g., 3D MHD has
nw=8, and one can select refining based on density, energy, and the first component of the magnetic field by setting flags(9)=3, flags(1)=1, flags(2)=5, flags(3)=6, since we predefined the order
rho m1 m2 m3 e b1 b2 b3. The weights wflags(1), wflags(2), and wflags(3) must be positive values between 0.0d0 and 1.0d0, and must add up to unity. By default, only the first conserved variable (typically density) is used in the comparison.
In the comparison involving the above selected variables, when the total error exceeds the value set by tol, the grid is triggered for refining. Reversely, if the error drops below tolratio * tol, the grid is coarsened. The user must always set a (problem dependent) value for tol (below 1), while the default value for tolratio=1.0d0/8.0d0 has shown to be a rather generally useful value. You can set tolerance values that differ per refinement level.
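Putting this together, a sketch of the corresponding amrlist entries for the 3D MHD example above (the tolerance values are placeholders, and tol/tolratio are given per level assuming the default nlevelshi=8):

```fortran
&amrlist
 errorestimate=3
 flags(9)=3
 flags(1)=1
 flags(2)=5
 flags(3)=6
 wflags(1)=0.6d0
 wflags(2)=0.2d0
 wflags(3)=0.2d0
 tol=8*0.1d0
 tolratio=8*0.125d0
/
```

Note that the three weights sum to 1.0d0, as required.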
The parameters tfixgrid and itfixgrid are available to fix the AMR hierarchical grid from a certain time (tfixgrid) or iteration (the it-counter for the time steps, itfixgrid) onwards. This may be handy for steady-state computations, or for those cases where you know that the initial conditions and physical setup are such that the AMR structure at t=0 will be optimal for all times. The parameter ditregrid is introduced to reconstruct the whole AMR grid once every ditregrid iteration(s), instead of regridding once every iteration by default.
&paramlist
 dtpar= DOUBLE
 courantpar= DOUBLE
 typecourant= 'maxsum' | 'summax' | 'minimum'
 slowsteps= INTEGER
/
If dtpar is positive, it sets a fixed time step dt; otherwise (a negative value, the default) the time step is limited by the CFL condition with Courant coefficient courantpar. Further restrictions on the time step can be put in the getdt_special subroutine in the AMRVACUSR module.
The typecourant='maxsum' means that the time step limit for the CFL conditions takes the maximum over a summed contribution to the maximum physical propagation speed for all dimensions. The detailed formulae are found in setdt.t.
If the slowsteps parameter is set to a positive integer value greater than 1, then in the first slowsteps-1 time steps dt is further reduced according to the formula
dt' = dt * [ 1 - (1 - step/slowsteps)^2 ], where step=1..slowsteps-1. This reduction can help to avoid problems resulting from numerically unfavourable initial conditions, e.g. very sharp discontinuities. It is normally inactive, with a default value -1.
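As a sketch, a paramlist combining a CFL-limited time step with such a gentle start-up (the values are placeholders):

```fortran
&paramlist
 courantpar=0.8d0
 typecourant='maxsum'
 slowsteps=10
/
```

With slowsteps=10, the first step then uses dt' = dt * (1 - 0.9^2) = 0.19 dt, and subsequent steps gradually approach the full dt.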