\subsection{Gravity}
\label{sec.gravity}
Solving for the accelerations of the cells and particles on the grid
due to self-gravity involves three steps: (i) computing the total
gravitating mass, (ii) solving for the gravitational potential field
with the appropriate boundary conditions, and (iii) differencing the
potential to get the acceleration, and, if necessary, interpolating
the acceleration back to the particles. These steps are described in
detail below.

First, the massive (dark matter and star) particles are distributed
onto the grids using the second-order cloud-in-cell (CIC)
interpolation technique \citep{Hockney88} to form a
spatially-discretized density field $\rho_{\rm DM}$. During the CIC
interpolation, particle positions are (temporarily) advanced by $0.5
v^n \Delta t$ so that we generate an estimate of the time-centered
density field. Particles on subgrids within the grid's volume are
also added to its gravitating field using the same method. In
addition, since the gravitating field for a grid is defined beyond the
grid edges (see below), massive particles from sibling grids and
subgrids that lie within the entire gravitating field are used. This
step can involve communication.
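
As a concrete illustration of this deposit step, the sketch below performs a
one-dimensional CIC deposit with the half-step position advance. This is a
minimal Python example rather than the \enzo\ implementation; the routine name,
the uniform periodic grid, and the unit conventions are assumptions made here
for clarity.
\begin{verbatim}
import numpy as np

def cic_deposit_1d(x, v, mass, dt, dx, ncells):
    """Deposit particle mass onto a periodic 1D grid with cloud-in-cell
    (linear) weights, after advancing positions by 0.5*v*dt so that the
    resulting density field is approximately time-centered."""
    rho = np.zeros(ncells)
    x_half = x + 0.5 * v * dt          # temporary half-step advance
    s = x_half / dx - 0.5              # position in units of cell centers
    i = np.floor(s).astype(int)        # left cell of the two-cell cloud
    w_right = s - i                    # linear CIC weights
    w_left = 1.0 - w_right
    np.add.at(rho, i % ncells, mass * w_left / dx)
    np.add.at(rho, (i + 1) % ncells, mass * w_right / dx)
    return rho
\end{verbatim}
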
Next, we add the baryonic grid densities in a similar fashion. In
particular, we treat baryonic cells as virtual CIC particles that are
placed at the cell centers but are advanced by $0.5 v^n \Delta t$
in order to approximately time-center the gravitating mass field.
Cells that are covered by further-refined grids are treated in a
similar way (i.e., we also use the subgrid cells as virtual CIC
particles). This procedure results in a total gravitating mass field
$\rho_{\rm total}^{n+1/2}$.
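
Under the same assumptions, the baryonic contribution can be obtained by
passing the cells through the identical deposit routine as virtual particles
located at the cell centers; the array names below (\texttt{rho\_gas},
\texttt{v\_gas}, \texttt{rho\_dm}) are placeholders, and the routine is the
hypothetical one sketched above.
\begin{verbatim}
# Treat each baryonic cell as a virtual CIC particle at its cell center,
# advanced by 0.5*v^n*dt exactly as the massive particles are (1D sketch).
x_cells = (np.arange(ncells) + 0.5) * dx     # cell-center positions
cell_mass = rho_gas * dx                     # mass carried by each cell
rho_total = rho_dm + cic_deposit_1d(x_cells, v_gas, cell_mass, dt, dx, ncells)
\end{verbatim}
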
To compute the potential field from this gravitating mass field on the
root grid, we use a fast Fourier transform. For periodic boundary
conditions, we can use either a simple Green's function kernel of
$-k^{-2}$, or the finite-difference equivalent \citep{Hockney88}:
\begin{equation}
G(\myvec{k}) = - \frac{(\Delta x/2)^2}{\sin^2(k_x \Delta x/2) + \sin^2(k_y \Delta y/2) + \sin^2(k_z \Delta z/2)},
\end{equation}
where $k^2 = k_x^2 + k_y^2 + k_z^2$ is the squared wavenumber in
Fourier space, and the potential is computed in $k$-space as usual via
$\tilde{\phi}(k) = G(k) \tilde{\rho}(k)$.
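
The corresponding $k$-space solve is straightforward. The sketch below applies
the finite-difference kernel above to a periodic cube in Python; the function
name, the source-term constant \texttt{G\_const} (e.g., $4\pi G$ in code
units), and the assumption of cubic cells are made for this illustration only.
\begin{verbatim}
import numpy as np

def solve_poisson_periodic(rho, dx, G_const):
    """Solve  del^2 phi = G_const * rho  on a periodic cube by FFT,
    using the finite-difference Green's function kernel (cubic cells)."""
    n = rho.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)        # 1D angular wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    denom = (np.sin(kx * dx / 2)**2 + np.sin(ky * dx / 2)**2
             + np.sin(kz * dx / 2)**2)
    green = np.zeros_like(denom)
    mask = denom > 0                                 # leave the k = 0 mode at zero
    green[mask] = -(dx / 2.0)**2 / denom[mask]
    phi_k = green * np.fft.fftn(G_const * rho)       # phi(k) = G(k) * S(k)
    return np.real(np.fft.ifftn(phi_k))
\end{verbatim}
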
For isolated boundary conditions, we use the James method
\citep{James77}. In this case, the Green's function is generated in
real space so as to have the correct zero-padding properties and then
transformed into the Fourier domain. In both cases, the potential is
then transformed back into the real domain to get potential values at
the cell centers. These are differenced with a two-point centered
difference scheme to obtain accelerations at the cell centers (except
if we are using the staggered \zeus-like solver, in which case the
accelerations are computed at the cell faces to match the velocities).
Particle accelerations are obtained using a (linear) CIC interpolation
from the grid.
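
For concreteness, a one-dimensional sketch of these last two steps, the
two-point centered difference for cell-centered accelerations and the linear
CIC interpolation back to a particle position, might look as follows (Python,
periodic grid, with function names of our own choosing):
\begin{verbatim}
import numpy as np

def accel_from_potential_1d(phi, dx):
    """Cell-centered acceleration a = -dphi/dx from a two-point
    centered difference on a periodic 1D grid."""
    return -(np.roll(phi, -1) - np.roll(phi, 1)) / (2.0 * dx)

def cic_interp_to_particle(field, x, dx):
    """Interpolate a cell-centered field to a particle position using
    the same linear (CIC) weights employed in the deposit step."""
    n = field.size
    s = x / dx - 0.5
    i = int(np.floor(s))
    w = s - i
    return (1.0 - w) * field[i % n] + w * field[(i + 1) % n]
\end{verbatim}
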
In order to calculate more accurate potentials on the subgrids, \enzo\
uses a technique that is similar to, but slightly different from, the
one used on the root grid.
The generation of the total gravitating mass field is essentially
identical, using CIC interpolation for both the particles and baryons,
including subgrids as before. To compute the potential on subgrids,
however, we use the standard seven-point (in three dimensions)
second-order finite difference approximation to Poisson's equation.
Boundary conditions are then interpolated from the potential values on
the parent grid. We use either tri-linear interpolation or a natural
second-order spline for this: both methods give similar results, but
the default is the tri-linear interpolation, which empirically
provides a reasonable compromise between speed and accuracy. The
potential equation on each subgrid is then solved, subject to these
Dirichlet boundary conditions, using a multigrid relaxation technique.
This is applied to each subgrid separately.
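
To make the subgrid solve concrete, the sketch below relaxes the seven-point
discretization of Poisson's equation with fixed Dirichlet values held in the
outermost layer of cells. Plain Jacobi sweeps stand in here for the multigrid
cycles actually used, and all names and conventions are assumptions made for
this illustration.
\begin{verbatim}
import numpy as np

def relax_poisson_dirichlet(phi, source, dx, n_sweeps=200):
    """Jacobi relaxation of the seven-point discrete Poisson equation
    del^2 phi = source on a subgrid.  The outermost layer of phi holds
    the Dirichlet boundary data interpolated from the parent grid and
    is never modified; only interior cells are updated."""
    for _ in range(n_sweeps):
        phi[1:-1, 1:-1, 1:-1] = (
            phi[2:, 1:-1, 1:-1] + phi[:-2, 1:-1, 1:-1] +
            phi[1:-1, 2:, 1:-1] + phi[1:-1, :-2, 1:-1] +
            phi[1:-1, 1:-1, 2:] + phi[1:-1, 1:-1, :-2] -
            dx * dx * source[1:-1, 1:-1, 1:-1]) / 6.0
    return phi
\end{verbatim}
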
The region immediately next to the boundary can contain unwanted
oscillations \citep[e.g.,][]{Anninos94}, and so we use an expanded
buffer zone around the grid, of size three parent grid boundary zones
(so typically six refined zones for a refinement factor of 2). The
density is computed in this region and the potential solved, but only
the region that overlaps with the active region of the grid itself is
used to calculate accelerations.

Simply interpolating the potential without feeding it back to higher
levels leads to errors in the potential at more refined levels, due to
the build-up of errors during the interpolation of coarse boundary
values. In addition, neighboring subgrids are not guaranteed to
generate the same potential values because of the lack of a coherent
potential solve including the whole grid hierarchy. In an attempt to
partially alleviate this problem, we allow for an iterative procedure
across sibling grids, in which the potential values on the boundary of
grids can be updated with the potential in `active' regions of
neighboring subgrids. To prevent overshoot, we average the potential
on the boundary and iterate; the number of iterations is set by a
run-time parameter and defaults to four.
This procedure can help in many cases, but does not necessarily
produce a coherent solution across all grids and so does not
completely solve the problem; we are working on a slower but more
accurate method that performs a multigrid solve across the whole grid
hierarchy (Reynolds \etal, in preparation).

At this point it is useful to emphasize that the effective force
resolution of an adaptive particle-mesh calculation is approximately
twice as coarse as the grid spacing at a given level of resolution.