Cross-correlation beamforming can be realised in a few lines of matrix operations in Python, making use of `pytorch`'s linear algebra optimisations for speed. For small problems (i.e., those that fit in memory), this is fast, efficient and fully parallel. For large problems, this approach fails, specifically when the matrices containing the cross correlations become too large for memory. To solve this, we can employ `dask` to divide the computations automatically into multiple tasks that can run across arbitrary infrastructure, while retaining much of the same syntax and logic.
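The core idea can be sketched in a few lines. The following is a minimal, self-contained illustration (the array sizes and variable names are hypothetical, not those of the notebooks): all pairwise cross-correlations and the beam power for every candidate source are formed as batched tensor contractions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: sensors, frequency bins, candidate source positions
n_sensors, n_freq, n_grid = 8, 32, 100

# Fourier-transformed recordings (sensors x frequencies)
d = rng.standard_normal((n_sensors, n_freq)) + 1j * rng.standard_normal((n_sensors, n_freq))

# Synthetic steering signals for every grid point (grid x sensors x frequencies)
s = np.exp(-1j * rng.uniform(0, 2 * np.pi, (n_grid, n_sensors, n_freq)))

# Cross-spectral density matrix of the data: K[i, j, f] = d_i(f) * conj(d_j(f))
K = np.einsum("if,jf->ijf", d, d.conj())

# Beam power for every grid point: sum over sensor pairs and frequencies
B = np.einsum("gif,ijf,gjf->g", s.conj(), K, s).real

print(B.shape)  # (100,)
```

Because everything is expressed as `einsum` contractions, the same logic ports almost verbatim to `pytorch` or `dask`.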
We demonstrate this with the notebooks in this repository:
- `beamforming_naive.ipynb`: naive beamforming code; slow, "pure" Python version for teaching purposes only
- `beamforming_numpy.ipynb`: same as above, rewritten in `numpy` using broadcasting etc.
- `beamforming_pytorch.ipynb`: same as above, replacing `numpy` functions with `pytorch` equivalents
- `beamforming_dask.ipynb`: same as above, moving memory-limited computations to `dask`
Other variants:
- `beamforming_geo.ipynb`: same as the `pytorch` version, but using geographical coordinates; requires `geokernels` for distance calculations
- `beamforming_planewave_data.ipynb`: plane-wave beamforming of seismic field data; requires `obspy` for handling seismograms
In these notebooks, logic and processing are not abstracted away into a package of functions. Instead, all processing happens within the notebooks for instructional purposes.
These are the runtimes of the cell that performs the beamforming (under "3. Beamforming") on a machine with 2x Intel Xeon Gold 6326 (16C/32T) and 512 GB RAM, for the parameters indicated below.
Is your code faster? Let me know!
**Test 1:**

| notebook version | runtime | speed-up |
|---|---|---|
| naive | 43.9 sec | 1x |
| numpy | 11.7 sec | 3.75x |
| pytorch | 0.9 sec | 48.8x |
| dask | 1.9 sec | 23.1x |
**Test 2:**

| notebook version | runtime | speed-up |
|---|---|---|
| naive | 4861.3 sec | 1x |
| numpy | fail | fail |
| pytorch | fail | fail |
| dask | 47.0 sec | 103.4x |
The `numpy` and `pytorch` versions fail in the second test because `S` would require 2.1 TiB of memory.
Other parameters in both tests: `grid_limit = 100`, `grid_spacing = 5`, `window_length = 100`, `sampling_rate = 10`, `fmin, fmax = 0.1, 1.0`
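A back-of-the-envelope estimate shows why the full cross-correlation tensor blows up. The dimensions below are hypothetical illustrations (the station count and grid layout are assumptions, not the exact values used in the notebooks):

```python
# Hypothetical problem size: a 2-D grid of 41 x 41 points (e.g., -100 to 100
# with spacing 5 per axis) and an assumed 1000-station array
n_grid = 41 * 41
n_stations = 1000
n_pairs = n_stations * (n_stations - 1) // 2  # unique station pairs
n_freq = 91  # frequency bins between fmin = 0.1 and fmax = 1.0 Hz

# One complex128 value per (grid point, pair, frequency) = 16 bytes each
bytes_total = n_grid * n_pairs * n_freq * 16
print(f"{bytes_total / 2**40:.1f} TiB")  # 1.1 TiB
```

Already at these assumed sizes the tensor exceeds typical machine memory by a wide margin, which is exactly the regime where `dask`'s chunked evaluation becomes necessary.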
This repository is also intended as a case study to teach students and researchers about a) the potential of making best use of already existing computing libraries for significant speed-ups compared to naively written Python code, and b) how much performance can be gained simply by moving from `numpy` to equivalent `pytorch` code. Note that `pytorch` is significantly faster in the example of cross-correlation beamforming because large tensors are involved. Further note that all linear equation systems, no matter what physics they express in your specific context, can be written in matrix formulations, allowing you to exploit the linear algebra optimisations developed in the machine learning community for your research.
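The move from `numpy` to `pytorch` is often a near-mechanical renaming. A minimal sketch (the array sizes are arbitrary, not the notebooks' actual code):

```python
import numpy as np
import torch

rng = np.random.default_rng(1)
a_np = rng.standard_normal((256, 256))
b_np = rng.standard_normal((256, 256))

# numpy version
c_np = np.einsum("ij,jk->ik", a_np, b_np)

# pytorch version: identical einsum signature, but large tensors benefit from
# pytorch's multithreaded / accelerated linear algebra backends
a_t, b_t = torch.from_numpy(a_np), torch.from_numpy(b_np)
c_t = torch.einsum("ij,jk->ik", a_t, b_t)

# both versions produce the same result
assert np.allclose(c_np, c_t.numpy())
```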
`dask` allows one to employ the same algorithm and largely the same syntax as the `pytorch` version, which means one doesn't have to worry about developing a different algorithm that is not memory-limited. However, `dask` also introduces a new optimisation problem: the choice of "good" chunk sizes for the specific system at hand. This is specific to the compute infrastructure used. On the bright side, this would need to be optimised only once for a given problem geometry (number of stations, grid points, frequencies). Even with arbitrarily chosen chunk sizes (100, 100, 100 in the notebook here), performance is good. Visit the dask documentation for more details.
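The chunking idea can be sketched as follows; the array sizes and chunk sizes here are illustrative, not tuned values:

```python
import dask.array as da

# Build a chunked array; only the chunks argument is new compared to numpy
x = da.random.standard_normal((4000, 4000), chunks=(1000, 1000))

# Largely the same syntax as numpy; nothing is computed yet, dask only
# records a task graph
y = (x @ x.T).mean(axis=0)

# compute() triggers execution: chunks are processed in parallel and never
# all need to fit in memory at once
result = y.compute()
print(result.shape)  # (4000,)
```

The choice `chunks=(1000, 1000)` trades scheduling overhead (many small tasks) against memory use per task, which is exactly the per-system tuning problem mentioned above.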
Beamforming is a phase-matching algorithm commonly used to estimate the origin and local phase velocity of a wavefront propagating across an array of sensors. The most basic beamformer is the delay-and-sum beamformer, where recordings across the sensors are phase-shifted and summed (forming the beam) to test for the best-fitting source origin and medium velocity (Rost and Thomas, 2002).
The cross-correlation beamformer (also Bartlett beamformer, conventional beamformer, etc.) applies the same delay-and-sum idea to correlation functions between all sensor pairs (Ruigrok et al. 2017). This has the major advantage that only the coherent part of the wavefield is taken into account. The major disadvantage is that the computation of cross correlations between all station pairs can become expensive fast, scaling with the number of sensor pairs, i.e., quadratically with the number of sensors.
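The quadratic growth of the pair count is easy to see, and in the frequency domain all pair cross-correlations can be formed at once as an outer product per frequency bin. A small sketch with hypothetical sizes:

```python
import numpy as np

# Unique sensor pairs grow quadratically with array size
for n in (10, 100, 1000):
    print(f"{n} sensors -> {n * (n - 1) // 2} pairs")

# All pair cross-spectra at once via an outer product per frequency bin
rng = np.random.default_rng(2)
n_sensors, n_freq = 10, 64
d = rng.standard_normal((n_sensors, n_freq)) + 1j * rng.standard_normal((n_sensors, n_freq))
K = np.einsum("if,jf->ijf", d, d.conj())
print(K.shape)  # (10, 10, 64)
```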
A few different formulations of this beamformer exist. We write it in the frequency domain as

$$B(\mathbf{x}) = \sum_{\omega} \sum_{i,j} s_i^*(\mathbf{x}, \omega)\, K_{ij}(\omega)\, s_j(\mathbf{x}, \omega),$$

with $K_{ij}(\omega) = d_i(\omega)\, d_j^*(\omega)$ the cross-spectral density matrix of the recordings $d_i(\omega)$ at sensors $i = 1, \dots, N$.

The synthetic signals $s_i(\mathbf{x}, \omega) = e^{-\mathrm{i}\omega t_i(\mathbf{x})}$ encode the travel time $t_i(\mathbf{x})$ from a candidate source position $\mathbf{x}$ to sensor $i$.
In seismology, "beamforming" is often synonymous with plane-wave beamforming. In plane-wave beamforming, the travel time is

$$t_i = \mathbf{p} \cdot \mathbf{x}_i,$$

with horizontal slowness vector $\mathbf{p}$ and sensor position $\mathbf{x}_i$, so the tested grid is defined in the slowness domain.
When curved wavefronts are allowed, sources may be located within the sensor array and the grid that is tested is defined in space instead of the slowness domain, adding at least one extra dimension. This is called matched field processing (e.g., Baggeroer et al. 1988). In practice, the difference between plane-wave beamforming and matched field processing lies in the computation of the Green's functions used as synthetic signals.
In MFP, the travel time is computed as

$$t_i(\mathbf{x}) = \frac{\lVert \mathbf{x} - \mathbf{x}_i \rVert}{c},$$

with medium velocity $c$ and the distance $\lVert \mathbf{x} - \mathbf{x}_i \rVert$ between the candidate source position $\mathbf{x}$ and sensor position $\mathbf{x}_i$.
The beamforming in the notebooks here is Matched Field Processing.
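The MFP travel-time and steering computation can be sketched as follows; the geometry, velocity and frequency below are hypothetical, not the notebooks' parameters:

```python
import numpy as np

# Hypothetical 2-D geometry: a 21 x 21 grid of candidate sources and 3 sensors
axis = np.arange(0, 101, 5)
grid = np.stack(np.meshgrid(axis, axis), axis=-1).reshape(-1, 2)  # (441, 2)
sensors = np.array([[10.0, 20.0], [50.0, 50.0], [90.0, 30.0]])
c = 3.0  # assumed homogeneous medium velocity

# Travel times from every grid point to every sensor: t = distance / velocity
dist = np.linalg.norm(grid[:, None, :] - sensors[None, :, :], axis=-1)
t = dist / c
print(t.shape)  # (441, 3)

# Synthetic (steering) signals at a single angular frequency
omega = 2 * np.pi * 0.5
s = np.exp(-1j * omega * t)
```

In the plane-wave case, only the line computing `t` changes (a dot product with a slowness vector instead of a distance over velocity); everything downstream is identical.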
Rost, S. & Thomas, C., 2002. Array seismology: Methods and applications. Reviews of Geophysics, 40, 2–1–2–27. doi:10.1029/2000RG000100
Ruigrok, E., Gibbons, S. & Wapenaar, K., 2017. Cross-correlation beamforming. J Seismol, 21, 495–508. doi:10.1007/s10950-016-9612-6
Baggeroer, A.B., Kuperman, W.A. & Schmidt, H., 1988. Matched field processing: Source localization in correlated noise as an optimum parameter estimation problem. The Journal of the Acoustical Society of America, 83, 571–587. doi:10.1121/1.396151
To run these notebooks, the following are required:
- Python
- the scientific Python stack (numpy, scipy, matplotlib)
- notebook
- torch
- dask
- geokernels (for distances on geographical grids)
- obspy (for the field-data notebook)
A functioning installation can be achieved, e.g., via conda:

```
>> conda create -n fast_beamforming python=3.11
>> conda activate fast_beamforming
>> conda install pytorch dask scipy matplotlib notebook
```