
Commit

cherry-pick with main
sadamov committed May 28, 2024
1 parent af7751a commit 5f538f9
Showing 11 changed files with 329 additions and 282 deletions.
36 changes: 14 additions & 22 deletions .github/workflows/pre-commit.yml
@@ -1,33 +1,25 @@
-name: Run pre-commit job
+name: lint

 on:
+  # trigger on pushes to any branch, but not main
   push:
     branches-ignore:
       - main
+  # and also on PRs to main
   pull_request:
     branches:
       - main

 jobs:
   pre-commit-job:
     runs-on: ubuntu-latest
-    defaults:
-      run:
-        shell: bash -l {0}
+    strategy:
+      matrix:
+        python-version: ["3.9", "3.10", "3.11", "3.12"]
     steps:
       - uses: actions/checkout@v2
       - name: Set up Python
        uses: actions/setup-python@v2
         with:
-          python-version: 3.9
-      - name: Install pre-commit hooks
-        run: |
-          pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 \
-            --index-url https://download.pytorch.org/whl/cpu
-          pip install -r requirements.txt
-          pip install pyg-lib==0.2.0 torch-scatter==2.1.1 torch-sparse==0.6.17 \
-            torch-cluster==1.6.1 torch-geometric==2.3.1 \
-            -f https://pytorch-geometric.com/whl/torch-2.0.1+cpu.html
-      - name: Run pre-commit hooks
-        run: |
-          pre-commit run --all-files
+          python-version: ${{ matrix.python-version }}
+      - uses: pre-commit/action@v2.0.3
66 changes: 26 additions & 40 deletions .pre-commit-config.yaml
@@ -1,51 +1,37 @@
 repos:
-- repo: https://github.com/pre-commit/pre-commit-hooks
-  rev: v4.5.0
-  hooks:
-    - id: check-ast
-    - id: check-case-conflict
-    - id: check-docstring-first
-    - id: check-symlinks
-    - id: check-toml
-    - id: check-yaml
-    - id: debug-statements
-    - id: end-of-file-fixer
-    - id: trailing-whitespace
-- repo: local
-  hooks:
-    - id: codespell
-      name: codespell
-      description: Check for spelling errors
-      language: system
-      entry: codespell
-- repo: local
-  hooks:
-    - id: black
-      name: black
-      description: Format Python code
-      language: system
-      entry: black
-      types_or: [python, pyi]
-- repo: local
-  hooks:
-    - id: isort
-      name: isort
-      description: Group and sort Python imports
-      language: system
-      entry: isort
-      types_or: [python, pyi, cython]
-- repo: local
-  hooks:
-    - id: flake8
-      name: flake8
-      description: Check Python code for correctness, consistency and adherence to best practices
-      language: system
-      entry: flake8 --max-line-length=80 --ignore=E203,F811,I002,W503
-      types: [python]
-- repo: local
-  hooks:
-    - id: pylint
-      name: pylint
-      entry: pylint -rn -sn
-      language: system
-      types: [python]
+  - repo: https://github.com/pre-commit/pre-commit-hooks
+    rev: v4.5.0
+    hooks:
+      - id: check-ast
+      - id: check-case-conflict
+      - id: check-docstring-first
+      - id: check-symlinks
+      - id: check-toml
+      - id: check-yaml
+      - id: debug-statements
+      - id: end-of-file-fixer
+      - id: trailing-whitespace
+
+  - repo: https://github.com/codespell-project/codespell
+    rev: v2.2.6
+    hooks:
+      - id: codespell
+        description: Check for spelling errors
+
+  - repo: https://github.com/psf/black
+    rev: 22.3.0
+    hooks:
+      - id: black
+        description: Format Python code
+        types_or: [python, pyi]
+
+  - repo: https://github.com/PyCQA/isort
+    rev: 5.12.0
+    hooks:
+      - id: isort
+        description: Group and sort Python imports
+        types_or: [python, pyi, cython]
+
+  - repo: https://github.com/PyCQA/flake8
+    rev: 7.0.0
+    hooks:
+      - id: flake8
+        description: Check Python code for correctness, consistency and adherence to best practices
72 changes: 72 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,72 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [unreleased](https://github.com/joeloskarsson/neural-lam/compare/v0.1.0...HEAD)

### Added

- Replaced `constants.py` with `data_config.yaml` for data configuration management
[\#31](https://github.com/joeloskarsson/neural-lam/pull/31)
@sadamov

- new metrics (`nll` and `crps_gauss`), a `metrics` submodule, and a standard-deviation (`stddiv`) output option
[c14b6b4](https://github.com/joeloskarsson/neural-lam/commit/c14b6b4323e6b56f1f18632b6ca8b0d65c3ce36a)
@joeloskarsson

- ability to "watch" metrics and log
[c14b6b4](https://github.com/joeloskarsson/neural-lam/commit/c14b6b4323e6b56f1f18632b6ca8b0d65c3ce36a)
@joeloskarsson

- pre-commit setup for linting and formatting
[\#6](https://github.com/joeloskarsson/neural-lam/pull/6), [\#8](https://github.com/joeloskarsson/neural-lam/pull/8)
@sadamov, @joeloskarsson
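One of the additions above is a `crps_gauss` metric. For a Gaussian forecast the CRPS has a well-known closed form; the sketch below is a generic illustration of that formula (the function name and signature are hypothetical, not the repository's `metrics` API):

```python
import math


def crps_gauss(mu, sigma, y):
    """Closed-form CRPS for a Gaussian forecast N(mu, sigma^2) and observation y.

    Standard identity (Gneiting & Raftery):
      CRPS = sigma * (z * (2*Phi(z) - 1) + 2*phi(z) - 1/sqrt(pi)),  z = (y - mu) / sigma
    """
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # phi(z), standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))          # Phi(z), standard normal cdf
    return sigma * (z * (2.0 * cdf - 1.0) + 2.0 * pdf - 1.0 / math.sqrt(math.pi))
```

Note that even a forecast whose mean hits the observation exactly has positive CRPS reflecting its spread: `crps_gauss(0.0, 1.0, 0.0)` is about 0.2337.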

### Changed

- Updated scripts and modules to use `data_config.yaml` instead of `constants.py`
[\#31](https://github.com/joeloskarsson/neural-lam/pull/31)
@sadamov

- Added new flags in `train_model.py` for configuration previously in `constants.py`
[\#31](https://github.com/joeloskarsson/neural-lam/pull/31)
@sadamov

- moved batch-static features ("water cover") into the forcing component returned by `WeatherDataset`
[\#13](https://github.com/joeloskarsson/neural-lam/pull/13)
@joeloskarsson

- change validation metric from `mae` to `rmse`
[c14b6b4](https://github.com/joeloskarsson/neural-lam/commit/c14b6b4323e6b56f1f18632b6ca8b0d65c3ce36a)
@joeloskarsson

- change RMSE definition to compute sqrt after all averaging
[\#10](https://github.com/joeloskarsson/neural-lam/pull/10)
@joeloskarsson
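The RMSE change above matters numerically because the square root and the mean do not commute. A small self-contained illustration (generic numbers, not project data):

```python
import math

squared_errors = [1.0, 4.0, 9.0, 16.0]  # e.g. per-grid-point squared errors

# sqrt before averaging: effectively averages absolute errors, (1 + 2 + 3 + 4) / 4
sqrt_then_mean = sum(math.sqrt(e) for e in squared_errors) / len(squared_errors)

# sqrt after all averaging: the conventional RMSE, sqrt(30 / 4)
mean_then_sqrt = math.sqrt(sum(squared_errors) / len(squared_errors))

print(sqrt_then_mean)  # 2.5
print(mean_then_sqrt)  # ~2.7386
```

Since the square root is concave, Jensen's inequality guarantees the sqrt-after-averaging value is always at least as large, so reported RMSE rises slightly under the new definition.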

### Removed

- `WeatherDataset(torch.Dataset)` no longer returns the "batch-static" component of a
  training item (only `prev_state`, `target_state` and `forcing`); the batch-static
  features are instead included in the forcing
[\#13](https://github.com/joeloskarsson/neural-lam/pull/13)
@joeloskarsson

### Maintenance

- simplify pre-commit setup by 1) reducing linting to static analysis only, excluding
  imports from external dependencies (these will be handled in a build/test CI/CD
  action introduced later), 2) pinning versions of linting tools in the pre-commit
  config (and removing them from `requirements.txt`), and 3) using a GitHub action
  to run pre-commit.
[\#29](https://github.com/mllam/neural-lam/pull/29)
@leifdenby


## [v0.1.0](https://github.com/joeloskarsson/neural-lam/releases/tag/v0.1.0)

First tagged release of `neural-lam`, matching the Oskarsson et al. (2023) publication
(<https://arxiv.org/abs/2309.17370>)
5 changes: 2 additions & 3 deletions README.md
@@ -45,7 +45,7 @@ Still, some restrictions are inevitable:
 ## A note on the limited area setting
 Currently we are using these models on a limited area covering the Nordic region, the so-called MEPS area (see [paper](https://arxiv.org/abs/2309.17370)).
 There are still some parts of the code that are quite specific to the MEPS area use case.
-This is in particular true for the mesh graph creation (`create_mesh.py`) and some of the constants used (`neural_lam/constants.py`).
+This is in particular true for the mesh graph creation (`create_mesh.py`) and some of the constants set in a `data_config.yaml` file (path specified via `train_model.py --data_config`).
 If there is interest in using Neural-LAM for other areas, it is not a substantial undertaking to refactor the code to be fully area-agnostic.
 We would be happy to support such enhancements.
 See the issues https://github.com/joeloskarsson/neural-lam/issues/2, https://github.com/joeloskarsson/neural-lam/issues/3 and https://github.com/joeloskarsson/neural-lam/issues/4 for some initial ideas on how this could be done.
@@ -104,13 +104,12 @@ The graph-related files are stored in a directory called `graphs`.

 ### Create remaining static features
 To create the remaining static files run the scripts `create_grid_features.py` and `create_parameter_weights.py`.
-The main option to set for these is just which dataset to use.

 ## Weights & Biases Integration
 The project is fully integrated with [Weights & Biases](https://www.wandb.ai/) (W&B) for logging and visualization, but can just as easily be used without it.
 When W&B is used, training configuration, training/test statistics and plots are sent to the W&B servers and made available in an interactive web interface.
 If W&B is turned off, logging instead saves everything locally to a directory like `wandb/dryrun...`.
-The W&B project name is set to `neural-lam`, but this can be changed in `neural_lam/constants.py`.
+The W&B project name is set to `neural-lam`, but this can be changed via the flags of `train_model.py` (using argparse).
 See the [W&B documentation](https://docs.wandb.ai/) for details.

 If you would like to login and use W&B, run:
19 changes: 8 additions & 11 deletions create_mesh.py
@@ -13,9 +13,7 @@
 from torch_geometric.utils.convert import from_networkx

 # First-party
-from neural_lam import utils
-
-# matplotlib.use('TkAgg')
+from neural_lam import config


 def plot_graph(graph, title=None):
@@ -157,6 +155,12 @@ def prepend_node_index(graph, new_index):

 def main():
     parser = ArgumentParser(description="Graph generation arguments")
+    parser.add_argument(
+        "--data_config",
+        type=str,
+        default="neural_lam/data_config.yaml",
+        help="Path to data config file (default: neural_lam/data_config.yaml)",
+    )
     parser.add_argument(
         "--graph",
         type=str,
@@ -182,20 +186,13 @@ def main():
         default=0,
         help="Generate hierarchical mesh graph (default: 0, no)",
     )
-    parser.add_argument(
-        "--data_config",
-        type=str,
-        default="neural_lam/data_config.yaml",
-        help="Path to data config file (default: neural_lam/data_config.yaml)",
-    )

     args = parser.parse_args()

     # Load grid positions
     graph_dir_path = os.path.join("graphs", args.graph)
     os.makedirs(graph_dir_path, exist_ok=True)

-    config_loader = utils.ConfigLoader(args.data_config)
+    config_loader = config.Config(args.data_config)
     xy = config_loader.get_nwp_xy()
     grid_xy = torch.tensor(xy)
     pos_max = torch.max(torch.abs(grid_xy))
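The diff above swaps `utils.ConfigLoader` for `config.Config`. A minimal sketch of what such a YAML-backed config wrapper can look like (a hypothetical simplification: the real class parses `data_config.yaml` and offers helpers like `get_nwp_xy()`; here a plain dict stands in for the parsed file):

```python
class Config:
    """Stand-in for a data-config wrapper around a parsed YAML mapping."""

    def __init__(self, path, values=None):
        # In the real class, `values` would come from parsing the file at `path`
        # (e.g. with pyyaml); here it is injected directly for illustration.
        self.path = path
        self.values = values if values is not None else {}

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails: expose top-level
        # config keys (e.g. cfg.dataset_name) as attributes.
        try:
            return self.values[name]
        except KeyError as err:
            raise AttributeError(name) from err


cfg = Config("neural_lam/data_config.yaml", {"dataset_name": "meps_example"})
print(cfg.dataset_name)  # meps_example
```

Wrapping the parsed mapping this way lets call sites read `cfg.dataset_name` instead of threading raw dicts through every script, which is the kind of cleanup the `constants.py` → `data_config.yaml` migration enables.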
