A microservice for multi-source consumption of NWP data, storing it in a common format. Built with inspiration
from the Hexagonal Architecture pattern, the nwp-consumer is
currently packaged with adapters for pulling and converting .grib
data from:
- MetOffice Atmospheric API
- CEDA Atmospheric Archive
- ECMWF MARS API
- DWD's ICON Model from the Opendata API
- CMC's GDPS Model from the Opendata API
- NOAA's GFS Model from AWS Open Data
- NOAA's GFS Model from NCAR's Archive
Similarly, the service can write to multiple sinks:
- Local filesystem
- AWS S3
- HuggingFace Datasets
Its modular design makes it straightforward to extend to further sources and sinks in the future.
The service uses environment variables to configure sources and sinks, in accordance with the Twelve-Factor App methodology. The program will inform you of any missing environment variables when using an adapter, but you can also check the config for the given module, or use the env command.
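For example, a local run configuring the CEDA source might look something like the following. The variable names shown here are illustrative only; the exact names required by each adapter are defined in its config module and reported by the env command.
$ export CEDA_FTP_USER="<username>"   # hypothetical credential variable for the CEDA source
$ export CEDA_FTP_PASS="<password>"   # hypothetical credential variable for the CEDA source
$ nwp-consumer <command...>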
This service is designed to be run as a Docker container. The Containerfile is the Dockerfile for the service. Running it this way is recommended due to the dependency on external non-Python binaries, which at the moment cannot be easily distributed in a PyPI package. To run, pull the latest version from ghcr.io via:
$ docker run \
-v /path/to/datadir:/data \
-e ENV_VAR=<value> \
ghcr.io/openclimatefix/nwp-consumer:latest <command...>
Ensure the external dependencies are installed. Then, do one of the following:
Either
- Install from PyPI with
$ pip install nwp-consumer
or
- Clone the repository and install the package via
$ git clone git@github.com:openclimatefix/nwp-consumer.git
$ cd nwp-consumer
$ pip install .
Then run the service via
$ ENV_VAR=<value> nwp-consumer <command...>
Whether running via Docker or the Python package, the available commands can be found with the help command or the --help flag. For example:
$ nwp-consumer --help
# or
$ docker run ghcr.io/openclimatefix/nwp-consumer:latest --help
The following terms are used throughout the codebase and documentation. They are defined here to avoid ambiguity.
- InitTime - The time at which a forecast is initialised. For example, a forecast initialised at 12:00 on 1st January.
- TargetTime - The time at which a predicted value is valid. For example, a forecast with InitTime 12:00 on 1st January predicts that the temperature at TargetTime 12:00 on 2nd January at position x will be 10 degrees.
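As a rough sketch of how these terms relate in code (the class and field names below are purely illustrative and are not the actual contents of internal/models.py):
import datetime as dt
from dataclasses import dataclass

@dataclass
class ForecastValue:
    """A single predicted value from one model run (illustrative only)."""
    init_time: dt.datetime    # when the forecast run was initialised
    target_time: dt.datetime  # when the predicted value is valid
    parameter: str            # e.g. "temperature"
    value: float

# A forecast initialised at 12:00 on 1st January predicting 10 degrees
# at 12:00 on 2nd January:
example = ForecastValue(
    init_time=dt.datetime(2023, 1, 1, 12, 0),
    target_time=dt.datetime(2023, 1, 2, 12, 0),
    parameter="temperature",
    value=10.0,
)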
Produced using exa:
$ exa --tree --git-ignore -F -I "*init*|test*.*"
./
├── Containerfile # The Dockerfile for the service
├── pyproject.toml # The build configuration for the service
├── README.md
└── src/
├── nwp_consumer/ # The main library package
│ ├── cmd/
│ │ └── main.py # The entrypoint to the service
│ └── internal/ # Packages internal to the service. Like the 'lib' folder
│ ├── config/
│ │ └── config.py # Contains the configuration specification for running the service
│ ├── inputs/ # Holds subpackages for each incoming data source
│ │ ├── ceda/
│ │ │ ├── _models.py
│ │ │ ├── client.py # Contains the client and functions to map CEDA data to the service model
│ │ │ └── README.md # Info about the CEDA data source
│ │ └── metoffice/
│ │ ├── _models.py
│ │ ├── client.py # Contains the client and functions to map MetOffice data to the service model
│ │ └── README.md # Info about the MetOffice data source
│ ├── models.py # Describes the internal data models for the service
│ ├── outputs/ # Holds subpackages for each data sink
│ │ ├── localfs/
│ │ │ └── client.py # Contains the client for storing data on the local filesystem
│ │ └── s3/
│ │ └── client.py # Contains the client for storing data on S3
│ └── service/ # Contains the business logic and use-cases of the application
│ └── service.py # Defines the service class for the application, whose methods are the use-cases
└── test_integration/
nwp-consumer is structured following principles from the hexagonal architecture pattern. In brief, this means a clear separation between the application's business logic - its Core - and the Actors that are external to it. In this package, the core of the service is in internal/service/ and the actors are in internal/inputs/ and internal/outputs/. The service logic has no knowledge of the external actors, instead defining interfaces that the actors must implement. These are found in internal/models.py. The actors are then responsible for implementing these interfaces, and are dependency-injected in at runtime. This allows the service to be easily tested and extended. See further reading for more information.
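In sketch form, the pattern looks roughly like the following. The interface and class names here are simplified for illustration; see internal/models.py and internal/service/service.py for the actual definitions.
import abc
import datetime as dt

class FetcherInterface(abc.ABC):
    """Port that each input adapter (e.g. ceda, metoffice) implements."""

    @abc.abstractmethod
    def fetch_raw_files(self, it: dt.datetime) -> list[str]:
        """Download the raw files for a given init time, returning their paths."""

class StorageInterface(abc.ABC):
    """Port that each output adapter (e.g. localfs, s3) implements."""

    @abc.abstractmethod
    def store(self, path: str) -> None:
        """Persist a processed file to the sink."""

class NWPConsumerService:
    """Core service: depends only on the ports, never on concrete adapters."""

    def __init__(self, fetcher: FetcherInterface, storer: StorageInterface) -> None:
        self.fetcher = fetcher
        self.storer = storer

    def download_and_store(self, it: dt.datetime) -> None:
        for path in self.fetcher.fetch_raw_files(it):
            self.storer.store(path)

# At runtime the concrete adapters are injected, e.g.
# service = NWPConsumerService(fetcher=ceda.Client(...), storer=s3.Client(...))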
Clone the repository, and create and activate a new Python virtualenv for it. cd to the repository root.
Install the External and Python dependencies as shown in the sections below.
This repository bundles often-used commands into a taskfile for convenience. To use these commands, ensure go-task is installed, which is easily done via Homebrew.
You can then see the available tasks using
$ task -l
The cfgrib Python library depends on the ECMWF ecCodes library, which must be installed on the system and accessible as a shared library.
On MacOS with Homebrew, use
$ brew install eccodes
Or if you manage binary packages with Conda use
$ conda install -c conda-forge cfgrib
As an alternative you may install the official source distribution by following the instructions at https://confluence.ecmwf.int/display/ECC/ecCodes+installation
You may run a simple selfcheck command to ensure that your system is set up correctly:
$ python -m <eccodes OR cfgrib> selfcheck
Found: ecCodes v2.27.0.
Your system is ready.
Install the required Python dependencies and install the package in editable mode with
$ pip install -e .
or use the taskfile
$ task install
This looks for requirements specified in the pyproject.toml
file.
Note that these are the bare dependencies for running the application. If you want to run tests, you need the development dependencies as well, which can be installed via
$ pip install -e ".[dev]"
or
$ task install-dev
Where is the requirements.txt file?
There is no requirements.txt file. Instead, the project uses setuptools' pyproject.toml integration to specify dependencies. This is a relatively recent feature of setuptools and pip, and is the recommended way to specify dependencies.
See the setuptools guide and
the PEP621 specification
for more information, as well as Further Reading.
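As an illustration, dependencies live under the [project] table of pyproject.toml, roughly like so (the packages listed here are examples, not the project's actual dependency list):
[project]
name = "nwp-consumer"
dependencies = [
    "cfgrib",   # example runtime dependency
    "xarray",   # example runtime dependency
]

[project.optional-dependencies]
dev = [
    "ruff",     # example development dependency
]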
Ensure you have installed the Python requirements and the External dependencies.
Run the unit tests with
$ python -m unittest discover -s src/nwp_consumer -p "test_*.py"
or
$ task test-unit
and the integration tests with
$ python -m unittest discover -s test_integration -p "test_*.py"
or
$ task test-integration
See further reading for more information on the src
directory structure.
On packaging a Python project using setuptools and pyproject.toml:
- The official PyPA packaging guide.
- A step-by-step practical guide on the godatadriven blog.
- The pyproject.toml metadata specification.
On hexagonal architecture:
- A concrete example using Python.
- An overview of the fundamentals, incorporating TypeScript.
- Another example using Go.
On the directory structure:
- The official PyPA discussion on src and flat layouts.
- See the OCF Organisation Repo for details on contributing.
- Find out more about OCF in the Meta Repo.
- Follow OCF on Twitter.
- Check out the OCF blog at https://openclimatefix.org/blog for updates.