Note that the project is still under development!
The "HDF5 Research Data Management Toolbox" (h5RDMtoolbox) is a Python package supporting everybody who works with HDF5 to achieve a sustainable data lifecycle following the FAIR (Findable, Accessible, Interoperable, Reusable) principles. It specifically supports the five main steps of planning, collecting, analyzing, sharing and reusing data. Please visit the documentation for detailed information or try the quickstart using Colab.
- Combining HDF5 and xarray to allow easy access to metadata and data during analysis and processing (see here).
- Assigning metadata with "globally unique and persistent identifiers" as required by F1 of the FAIR principles. This can be achieved by using RDF triples, which removes "ambiguity in the meaning of your published data".
- Defining standard attributes through conventions and requiring users to provide certain attributes in their HDF5 files, such as units and a description.
- Uploading HDF5 files directly to repositories like Zenodo or using them with noSQL databases like mongoDB.
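The attribute-based metadata these features build on can be illustrated with plain `h5py` (a minimal, hedged sketch: the attribute names `units`, `description` and `quantity_iri`, and the QUDT IRI, are illustrative examples of what a convention might require, not the toolbox's exact API):

```python
import h5py
import numpy as np

# Sketch: store a dataset together with the kind of metadata (units, a
# description) that a convention would require, plus an IRI-valued attribute
# acting as a globally unique and persistent identifier (illustrative only).
with h5py.File("example.hdf", "w") as h5:
    ds = h5.create_dataset("velocity", data=np.array([1.2, 2.4, 3.1]))
    ds.attrs["units"] = "m/s"
    ds.attrs["description"] = "streamwise velocity"
    ds.attrs["quantity_iri"] = "https://qudt.org/vocab/quantitykind/Velocity"

# Reading the metadata back is a plain attribute lookup:
with h5py.File("example.hdf", "r") as h5:
    print(h5["velocity"].attrs["units"])
```

The toolbox automates exactly this kind of bookkeeping, so that required attributes are validated instead of being set ad hoc.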
For everybody who is ...
- ... looking for a management approach for his or her data.
- ... part of a community that has not yet established a stable convention.
- ... working with small and big data that fits into HDF5 files.
- ... looking for an easy way to work with HDF5, especially through Jupyter Notebooks.
- ... trying to integrate HDF5 with repositories and databases.
- ... wishing to enrich data semantically with the RDF standard.
- ... looking for a way to do all of the above while not needing to learn a new syntax.
- ... new to HDF5 and wanting to learn about it, especially with respect to the FAIR principles and data management.
The toolbox may be less suitable for everybody who ...
- ... is looking for a management approach which at the same time allows high-performance and/or parallel work with HDF5.
- ... already has well-established conventions and management approaches in his or her community.
The toolbox implements five modules, which are shown below. The numbers refer to their main usage within the stages of the data lifecycle above. Except for the wrapper module, which uses the convention module, all modules are independent of each other.
Current implementation highlights in the modules:
- The wrapper module adds functionality on top of the `h5py` package. It allows including so-called standard names, which are defined in conventions, and it implements interfaces, for example to the package `xarray`, which allows carrying metadata from HDF5 to the user. Other high-level interfaces like `.rdf` allow assigning semantic information to the HDF5 file.
- For the database module, `hdfDB` and `mongoDB` are implemented. The `hdfDB` module allows using HDF5 files as a database. The `mongoDB` module allows using mongoDB as a database by mapping the metadata of HDF5 files to it.
- For the repository module, a Zenodo interface is implemented. Zenodo is a repository that allows uploading and downloading data with a persistent identifier.
- For the convention module, standard attributes are implemented.
- The layout module allows defining expectations on the internal layout (object names, location, attributes, properties) of HDF5 files.
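The mapping idea behind the database module can be sketched in plain Python: walk the HDF5 file and collect each object's attributes into flat, JSON-like documents that a database such as mongoDB could index. This is a simplified illustration of the concept, not the toolbox's actual schema; the helper name `hdf_metadata_to_documents` is hypothetical.

```python
import json
import h5py
import numpy as np

def hdf_metadata_to_documents(filename):
    """Collect one JSON-like document per HDF5 object (sketch only)."""
    docs = []
    with h5py.File(filename, "r") as h5:
        def visit(name, obj):
            docs.append({
                "name": "/" + name,
                # stringify attribute values so the document is JSON-serializable
                "attrs": {k: str(v) for k, v in obj.attrs.items()},
            })
        h5.visititems(visit)
    return docs

# Build a small example file, then extract its metadata as documents.
with h5py.File("demo.hdf", "w") as h5:
    ds = h5.create_dataset("u", data=np.zeros(3))
    ds.attrs["units"] = "m/s"

docs = hdf_metadata_to_documents("demo.hdf")
print(json.dumps(docs, indent=2))
```

Once the metadata is in document form, inserting it into a mongoDB collection (or querying it in place, as `hdfDB` does) becomes a standard database operation.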
A quickstart notebook can be tested by clicking on the following badge:
Please find a comprehensive documentation with many examples here or by clicking on the image, which shows the research data lifecycle in the center and the respective toolbox features on the outside:
A paper is published in the journal ing.grid.
Use Python 3.8 or higher (automatic testing is performed up to 3.12). If you are a regular user, you can install the package via pip:
```bash
pip install h5RDMtoolbox
```
Developers may clone the repository and install the package from source. Clone the repository first:

```bash
git clone https://github.com/matthiasprobst/h5RDMtoolbox.git
```

Then, run

```bash
pip install h5RDMtoolbox/
```

Add `--user` if you do not have root access.
For a development installation, run

```bash
pip install -e h5RDMtoolbox/
```
The core functionality depends on the following packages. Some of them serve general management tasks; others are specific to the features of the package:
General dependencies are ...
- `numpy>=1.20`: Scientific computing, handling of arrays
- `matplotlib>=3.5.2`: Plotting
- `appdirs>=1.4.4`: Managing user and application directories
- `packaging`: Version handling
- `IPython>=8.4.0`: Pretty display of data in notebooks
- `regex>=2020.7.9`: Working with regular expressions
Specific to the package are ...
- `h5py>=3.7.0`: HDF5 file interface
- `xarray>=2022.3.0`: Working with scientific arrays in combination with attributes. Allows carrying metadata from HDF5 to the user
- `pint>=0.19.2`: Allows working with units
- `pint_xarray>=0.2.1`: Working with units for usage with xarray
- `python-forge==18.6.0`: Used to update function signatures when using the standard attributes
- `pydantic`: Used to validate standard attributes
- `pyyaml>6.0.0`: Reading and writing of YAML files, e.g. metadata definitions (conventions). Note that lower versions collide with Python 3.11
- `requests`: Used to download files from the internet or validate URLs, e.g. metadata definitions (conventions)
- `rdflib`: Used to enable working with RDF
- `ontolutils`: Required to work with RDF and derive semantic descriptions of HDF5 file content
To run unit tests or to enable certain features, additional dependencies must be installed.
Install optional dependencies by specifying them in square brackets after the package name, e.g.:
```bash
pip install h5RDMtoolbox[mongodb]
```
`[mongodb]`
- `pymongo>=4.2.0`: Database solution for HDF5 files

`[csv]`
- `pandas>=1.4.3`: Mainly used for reading CSV files and pretty printing

`[snt]`
- `xmltodict`: Reading of XML files
- `tabulate>=0.8.10`: Pretty printing of tables
- `python-gitlab`: Access to GitLab repositories
- `pypandoc>=2.3`: Conversion of markdown files to HTML
If you intend to use the package in your work, you may cite the software itself as published in the Zenodo repository. A related paper is published in the journal ing.grid. Thank you!
Here is the BibTeX entry for it:
```bibtex
@article{probst2024h5rdmtoolbox,
  author = {Matthias Probst and Balazs Pritz},
  title = {h5RDMtoolbox - A Python Toolbox for FAIR Data Management around HDF5},
  volume = {2},
  year = {2024},
  url = {https://www.inggrid.org/article/id/4028/},
  issue = {1},
  doi = {10.48694/inggrid.4028},
  month = {8},
  keywords = {Data management, HDF5, metadata, data lifecycle, Python, database},
  issn = {2941-1300},
  publisher = {Universitäts- und Landesbibliothek Darmstadt},
  journal = {ing.grid}
}
```
Feel free to contribute. Make sure to write docstrings for your methods and classes, write tests, and follow PEP 8 (https://peps.python.org/pep-0008/). Put your tests into the test/ folder and visit the README file there for more information. Please also add a Jupyter notebook in the docs/ folder in order to document your code; the README file in the docs folder explains how to compile the documentation. Please use the NumPy style for docstrings: https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_numpy.html#example-numpy
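As a quick orientation, a minimal NumPy-style docstring looks like this (the function `scale` is purely illustrative, not part of the toolbox):

```python
def scale(data, factor):
    """Scale a list of values by a constant factor.

    Parameters
    ----------
    data : list of float
        Values to be scaled.
    factor : float
        Multiplicative scaling factor.

    Returns
    -------
    list of float
        The scaled values.

    Examples
    --------
    >>> scale([1.0, 2.0], 2.0)
    [2.0, 4.0]
    """
    return [v * factor for v in data]
```

The `Parameters`, `Returns` and `Examples` sections are rendered by sphinxcontrib-napoleon when the documentation is compiled.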