Quickly read, analyze, and visualize SMLM (single-molecule localization microscopy) localization data from a wide range of localization algorithms.
Example rendering of the EPFL challenge dataset 'MT0.N1.HD'.
See demo.ipynb for example usage.

Dependencies:
- vtk
- numpy
- jupyter
- pandas
- seaborn
- requests
- scipy
See requirements.yml for Conda and requirements.txt for pip.
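To create a Conda environment from that file (the resulting environment name is defined inside requirements.yml):

$ conda env create -f requirements.yml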
Supported formats:

- EPFL Challenge
  - `epflreader.EPFLReader('data.csv')`
  - source
- Leica GSD
  - `gsdreader.GSDReader('test.bin')` # binary format, with test.desc in the same folder
  - `gsdreader.GSDReader('test.ascii', preprocess=True, binary=False)` # ASCII format
  - Needs a pixel-to-nm conversion (e.g. 160 nm/px): `obj.points *= 160` (see the sketch after this list)
  - source
- Tafteh et al. dSTORM with z-drift correction (LSI - UBC)
  - `dlpreader.DlpReader('test.3dlp')`
  - source
- Rainstorm
  - `db = rainstormreader.RainStormReader('data.csv')` # automatically detects the pixel-to-nm conversion
  - source
- Abbelight
  - `ab = abbelightreader.AbbelightReader('data.csv')` # coordinates in nm
  - source
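A minimal sketch of loading a dataset and applying the pixel-to-nm conversion; the module path, file names, and the 160 nm/px value are illustrative assumptions:

```python
from smlmvis import gsdreader  # module path is an assumption; check the package layout

# Load a binary GSD file; the matching 'test.desc' metadata file
# must sit in the same folder (file names are placeholders).
obj = gsdreader.GSDReader('test.bin')

# GSD coordinates are in pixels, so scale to nm.
# 160 nm/px is only an example; use your camera's actual pixel size.
obj.points *= 160
```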
Install with pip:

$ pip install smlmvis

or with conda:

$ conda install -c bcardoen smlmvis

or from source:

$ git clone git@github.com:bencardoen/smlmvis.git
$ cd smlmvis
$ pip install .
You may want to install optional dependencies, e.g. Jupyter and seaborn for the demo:

$ pip install jupyter seaborn
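Then launch the demo notebook:

$ jupyter notebook demo.ipynb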
A typical workflow (sketched in code after this list) is:
- use one of the readers (e.g. `GSDReader` in `smlmvis.gsdreader`) to load the SMLM data
- process the point cloud (`obj.points`) or compute statistics on the metadata (`obj.values`)
- write out the data to VTK/ParaView format using e.g. `VtuWriter` in `vtuwriter`
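A minimal end-to-end sketch of that workflow; the exact `VtuWriter` call signature is an assumption here, so check `vtuwriter`'s docstrings before relying on it:

```python
from smlmvis import epflreader, vtuwriter  # module paths assumed from the examples above

# 1. Read SMLM localizations (file name is a placeholder).
obj = epflreader.EPFLReader('data.csv')

# 2. Process the point cloud and/or its metadata.
points = obj.points   # N x 3 coordinates in nm
values = obj.values   # per-localization metadata (e.g. intensity, frame)

# 3. Write to a VTK/ParaView-readable file.
# NOTE: the argument order is an assumption, not the verified API.
vtuwriter.VtuWriter('out.vtu', points, values)
```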
Cite as:

@misc{Cardoen2019,
author = {Cardoen, Ben},
title = {Superresolution visualization of 3D protein localization data from a range of microscopes},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {189660035},
howpublished = {\url{https://github.com/bencardoen/smlmvis/}}
}
See tests/test_writer.py for an end-to-end test: it downloads the EPFL challenge dataset, reads and decodes it, writes it to VTK, and compares the result with a reference. The VTU writing code draws heavily on the VTK examples to interface with VTK.
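To run it, assuming pytest is installed:

$ python -m pytest tests/test_writer.py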