
Commit

Merge remote-tracking branch 'upstream/main' into feature_rank_one_utils
sadamov committed May 25, 2024
2 parents 57a396c + 5b71be3 commit ec796cb
Showing 20 changed files with 378 additions and 282 deletions.
3 changes: 3 additions & 0 deletions .flake8
@@ -0,0 +1,3 @@
[flake8]
max-line-length = 88
ignore = E203, F811, I002, W503
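The ignore list above interacts directly with black's formatting style: E203 (whitespace before `:`) and W503 (line break before a binary operator) both fire on code that black itself produces, so they are conventionally disabled when black and flake8 run together. A small illustration (the functions below are hypothetical, not from this repository):

```python
def window(values, lower, upper, offset):
    # black formats slices with complex bounds as `a : b` -> flake8 E203
    return values[lower + offset : upper + offset]


def total(a, b, c):
    # black breaks long expressions *before* the operator -> flake8 W503
    return (
        a
        + b
        + c
    )


print(window(list(range(10)), 1, 4, 2))  # [3, 4, 5]
print(total(1, 2, 3))  # 6
```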
36 changes: 14 additions & 22 deletions .github/workflows/pre-commit.yml
@@ -1,33 +1,25 @@
name: Run pre-commit job
name: lint

on:
push:
# trigger on pushes to any branch, but not main
push:
branches-ignore:
- main
# and also on PRs to main
pull_request:
branches:
- main
pull_request:
branches:
- main
- main

jobs:
pre-commit-job:
pre-commit-job:
runs-on: ubuntu-latest
defaults:
run:
shell: bash -l {0}
strategy:
matrix:
python-version: ["3.9", "3.10", "3.11", "3.12"]
steps:
- uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Install pre-commit hooks
run: |
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 \
--index-url https://download.pytorch.org/whl/cpu
pip install -r requirements.txt
pip install pyg-lib==0.2.0 torch-scatter==2.1.1 torch-sparse==0.6.17 \
torch-cluster==1.6.1 torch-geometric==2.3.1 \
-f https://pytorch-geometric.com/whl/torch-2.0.1+cpu.html
- name: Run pre-commit hooks
run: |
pre-commit run --all-files
python-version: ${{ matrix.python-version }}
- uses: pre-commit/action@v2.0.3
1 change: 1 addition & 0 deletions .gitignore
@@ -7,6 +7,7 @@ graphs
*.sif
sweeps
test_*.sh
.vscode

### Python ###
# Byte-compiled / optimized / DLL files
66 changes: 26 additions & 40 deletions .pre-commit-config.yaml
@@ -1,51 +1,37 @@
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.5.0
hooks:
- id: check-ast
- id: check-case-conflict
- id: check-docstring-first
- id: check-symlinks
- id: check-toml
- id: check-yaml
- id: debug-statements
- id: end-of-file-fixer
- id: trailing-whitespace
- repo: local
- id: check-ast
- id: check-case-conflict
- id: check-docstring-first
- id: check-symlinks
- id: check-toml
- id: check-yaml
- id: debug-statements
- id: end-of-file-fixer
- id: trailing-whitespace

- repo: https://github.com/codespell-project/codespell
rev: v2.2.6
hooks:
- id: codespell
name: codespell
- id: codespell
description: Check for spelling errors
language: system
entry: codespell
- repo: local

- repo: https://github.com/psf/black
rev: 22.3.0
hooks:
- id: black
name: black
- id: black
description: Format Python code
language: system
entry: black
types_or: [python, pyi]
- repo: local

- repo: https://github.com/PyCQA/isort
rev: 5.12.0
hooks:
- id: isort
name: isort
- id: isort
description: Group and sort Python imports
language: system
entry: isort
types_or: [python, pyi, cython]
- repo: local

- repo: https://github.com/PyCQA/flake8
rev: 7.0.0
hooks:
- id: flake8
name: flake8
- id: flake8
description: Check Python code for correctness, consistency and adherence to best practices
language: system
entry: flake8 --max-line-length=80 --ignore=E203,F811,I002,W503
types: [python]
- repo: local
hooks:
- id: pylint
name: pylint
entry: pylint -rn -sn
language: system
types: [python]
72 changes: 72 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,72 @@
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [unreleased](https://github.com/joeloskarsson/neural-lam/compare/v0.1.0...HEAD)

### Added

- Replaced `constants.py` with `data_config.yaml` for data configuration management
[\#31](https://github.com/joeloskarsson/neural-lam/pull/31)
@sadamov

- new metrics (`nll` and `crps_gauss`), a `metrics` submodule, and a std.-dev. output option
[c14b6b4](https://github.com/joeloskarsson/neural-lam/commit/c14b6b4323e6b56f1f18632b6ca8b0d65c3ce36a)
@joeloskarsson

- ability to "watch" metrics and log
[c14b6b4](https://github.com/joeloskarsson/neural-lam/commit/c14b6b4323e6b56f1f18632b6ca8b0d65c3ce36a)
@joeloskarsson

- pre-commit setup for linting and formatting
[\#6](https://github.com/joeloskarsson/neural-lam/pull/6), [\#8](https://github.com/joeloskarsson/neural-lam/pull/8)
@sadamov, @joeloskarsson

### Changed

- Updated scripts and modules to use `data_config.yaml` instead of `constants.py`
[\#31](https://github.com/joeloskarsson/neural-lam/pull/31)
@sadamov

- Added new flags in `train_model.py` for configuration previously in `constants.py`
[\#31](https://github.com/joeloskarsson/neural-lam/pull/31)
@sadamov

- moved batch-static features ("water cover") into the forcing component returned by `WeatherDataset`
[\#13](https://github.com/joeloskarsson/neural-lam/pull/13)
@joeloskarsson

- change validation metric from `mae` to `rmse`
[c14b6b4](https://github.com/joeloskarsson/neural-lam/commit/c14b6b4323e6b56f1f18632b6ca8b0d65c3ce36a)
@joeloskarsson

- change RMSE definition to compute sqrt after all averaging
[\#10](https://github.com/joeloskarsson/neural-lam/pull/10)
@joeloskarsson

### Removed

- `WeatherDataset(torch.Dataset)` no longer returns the "batch-static" component of a
training item (only `prev_state`, `target_state` and `forcing`); the batch-static
features are instead included in the forcing
[\#13](https://github.com/joeloskarsson/neural-lam/pull/13)
@joeloskarsson

### Maintenance

- simplify pre-commit setup by 1) reducing linting to only cover static
analysis, excluding imports from external dependencies (these will be handled
in a build/test CI/CD action introduced later), 2) pinning versions of linting
tools in the pre-commit config (and removing them from `requirements.txt`) and 3) using
a GitHub action to run pre-commit.
[\#29](https://github.com/mllam/neural-lam/pull/29)
@leifdenby


## [v0.1.0](https://github.com/joeloskarsson/neural-lam/releases/tag/v0.1.0)

First tagged release of `neural-lam`, matching the Oskarsson et al. (2023) publication
(<https://arxiv.org/abs/2309.17370>)
5 changes: 2 additions & 3 deletions README.md
Expand Up @@ -45,7 +45,7 @@ Still, some restrictions are inevitable:
## A note on the limited area setting
Currently we are using these models on a limited area covering the Nordic region, the so-called MEPS area (see [paper](https://arxiv.org/abs/2309.17370)).
There are still some parts of the code that are quite specific to the MEPS area use case.
This is in particular true for the mesh graph creation (`create_mesh.py`) and some of the constants used (`neural_lam/constants.py`).
This is in particular true for the mesh graph creation (`create_mesh.py`) and some of the constants set in the `data_config.yaml` file (path specified via `train_model.py --data_config`).
If there is interest in using Neural-LAM for other areas, it would not be a substantial undertaking to refactor the code to be fully area-agnostic.
We would be happy to support such enhancements.
See the issues https://github.com/joeloskarsson/neural-lam/issues/2, https://github.com/joeloskarsson/neural-lam/issues/3 and https://github.com/joeloskarsson/neural-lam/issues/4 for some initial ideas on how this could be done.
@@ -104,13 +104,12 @@ The graph-related files are stored in a directory called `graphs`.

### Create remaining static features
To create the remaining static files run the scripts `create_grid_features.py` and `create_parameter_weights.py`.
The main option to set for these is just which dataset to use.

## Weights & Biases Integration
The project is fully integrated with [Weights & Biases](https://www.wandb.ai/) (W&B) for logging and visualization, but can just as easily be used without it.
When W&B is used, training configuration, training/test statistics and plots are sent to the W&B servers and made available in an interactive web interface.
If W&B is turned off, logging instead saves everything locally to a directory like `wandb/dryrun...`.
The W&B project name is set to `neural-lam`, but this can be changed in `neural_lam/constants.py`.
The W&B project name is set to `neural-lam`, but this can be changed via the flags of `train_model.py` (parsed with `argparse`).
See the [W&B documentation](https://docs.wandb.ai/) for details.

If you would like to login and use W&B, run:
12 changes: 8 additions & 4 deletions create_grid_features.py
@@ -6,21 +6,25 @@
import numpy as np
import torch

# First-party
from neural_lam import config


def main():
"""
Pre-compute all static features related to the grid nodes
"""
parser = ArgumentParser(description="Training arguments")
parser.add_argument(
"--dataset",
"--data_config",
type=str,
default="meps_example",
help="Dataset to compute weights for (default: meps_example)",
default="neural_lam/data_config.yaml",
help="Path to data config file (default: neural_lam/data_config.yaml)",
)
args = parser.parse_args()
config_loader = config.Config.from_file(args.data_config)

static_dir_path = os.path.join("data", args.dataset, "static")
static_dir_path = os.path.join("data", config_loader.dataset.name, "static")

# -- Static grid node features --
grid_xy = torch.tensor(
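The switch from `--dataset` to `--data_config` means each script now reads its settings through `config.Config.from_file` and attribute chains like `config_loader.dataset.name`. The actual `neural_lam/config.py` is not shown in this diff; the sketch below only illustrates how such a YAML-backed loader with attribute access could work (all names here are assumptions, not the real implementation):

```python
class Config:
    """Minimal sketch of a YAML-backed config with attribute access.

    NOT the real `neural_lam.config` (its source is not part of this
    diff); it only demonstrates the `cfg.dataset.name` access pattern.
    """

    def __init__(self, values):
        self._values = values

    @classmethod
    def from_file(cls, path):
        import yaml  # PyYAML, assumed to be in the project requirements

        with open(path, encoding="utf-8") as f:
            return cls(yaml.safe_load(f))

    def __getattr__(self, name):
        value = self._values[name]
        # Wrap nested mappings so chained attribute access keeps working
        return Config(value) if isinstance(value, dict) else value


cfg = Config({"dataset": {"name": "meps_example"}})
print(cfg.dataset.name)  # meps_example
```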
13 changes: 8 additions & 5 deletions create_mesh.py
Expand Up @@ -12,6 +12,9 @@
import torch_geometric as pyg
from torch_geometric.utils.convert import from_networkx

# First-party
from neural_lam import config


def plot_graph(graph, title=None):
fig, axis = plt.subplots(figsize=(8, 8), dpi=200) # W,H
@@ -153,11 +156,10 @@ def prepend_node_index(graph, new_index):
def main():
parser = ArgumentParser(description="Graph generation arguments")
parser.add_argument(
"--dataset",
"--data_config",
type=str,
default="meps_example",
help="Dataset to load grid point coordinates from "
"(default: meps_example)",
default="neural_lam/data_config.yaml",
help="Path to data config file (default: neural_lam/data_config.yaml)",
)
parser.add_argument(
"--graph",
@@ -187,7 +189,8 @@ def main():
args = parser.parse_args()

# Load grid positions
static_dir_path = os.path.join("data", args.dataset, "static")
config_loader = config.Config.from_file(args.data_config)
static_dir_path = os.path.join("data", config_loader.dataset.name, "static")
graph_dir_path = os.path.join("graphs", args.graph)
os.makedirs(graph_dir_path, exist_ok=True)

20 changes: 12 additions & 8 deletions create_parameter_weights.py
@@ -8,7 +8,7 @@
from tqdm import tqdm

# First-party
from neural_lam import constants
from neural_lam import config
from neural_lam.weather_dataset import WeatherDataset


@@ -18,10 +18,10 @@ def main():
"""
parser = ArgumentParser(description="Training arguments")
parser.add_argument(
"--dataset",
"--data_config",
type=str,
default="meps_example",
help="Dataset to compute weights for (default: meps_example)",
default="neural_lam/data_config.yaml",
help="Path to data config file (default: neural_lam/data_config.yaml)",
)
parser.add_argument(
"--batch_size",
@@ -43,7 +43,8 @@
)
args = parser.parse_args()

static_dir_path = os.path.join("data", args.dataset, "static")
config_loader = config.Config.from_file(args.data_config)
static_dir_path = os.path.join("data", config_loader.dataset.name, "static")

# Create parameter weights based on height
# based on fig A.1 in graph cast paper
@@ -56,7 +57,10 @@
"500": 0.03,
}
w_list = np.array(
[w_dict[par.split("_")[-2]] for par in constants.PARAM_NAMES]
[
w_dict[par.split("_")[-2]]
for par in config_loader.dataset.var_longnames
]
)
print("Saving parameter weights...")
np.save(
@@ -66,7 +70,7 @@

# Load dataset without any subsampling
ds = WeatherDataset(
args.dataset,
config_loader.dataset.name,
split="train",
subsample_step=1,
pred_length=63,
@@ -113,7 +117,7 @@ def main():
# Compute mean and std.-dev. of one-step differences across the dataset
print("Computing mean and std.-dev. for one-step differences...")
ds_standard = WeatherDataset(
args.dataset,
config_loader.dataset.name,
split="train",
subsample_step=1,
pred_length=63,
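The new weight lookup in `create_parameter_weights.py` keys `w_dict` on the second-to-last `_`-separated token of each variable long-name (a vertical level such as `"500"`). A standalone sketch of that lookup — the variable names and all weights except `"500": 0.03` are made up for illustration, and the real script wraps the result in `np.array`:

```python
# Level -> weight mapping; only the "500": 0.03 entry is visible in the
# diff above, the other value is a placeholder for this example.
w_dict = {
    "850": 0.05,
    "500": 0.03,
}

# Hypothetical variable long-names; the real ones come from
# `config_loader.dataset.var_longnames` in data_config.yaml.
var_longnames = [
    "u_isobaricInhPa_500_instant",
    "t_isobaricInhPa_850_instant",
]

# Same indexing as the diff: split on "_", take the second-to-last token
w_list = [w_dict[par.split("_")[-2]] for par in var_longnames]
print(w_list)  # [0.03, 0.05]
```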
