Add a decoding mask option to only include subset of grid nodes in m2g #34
base: main
Conversation
Looks good! Agree that this is required. Testing for "less than" seems sufficient 👍
More general comment: I am slightly worried about the mask being generated in mllam-data-prep and then being used here and in neural-lam. We probably need some solid checks in neural-lam about the dimensions of the graph, the data-tensors and the mask.
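A rough sketch of what such a check could look like; all names here (decode_mask, m2g_edge_index, n_grid_nodes) are illustrative assumptions, not existing neural-lam code:

```python
import torch

def check_decode_mask(decode_mask: torch.Tensor,
                      n_grid_nodes: int,
                      m2g_edge_index: torch.Tensor) -> None:
    """Rough consistency checks between the mask, the grid size and the m2g graph."""
    assert decode_mask.dtype == torch.bool
    # one boolean per grid node in the data tensors
    assert decode_mask.shape == (n_grid_nodes,)
    # assuming m2g targets index into the full set of grid nodes,
    # every m2g edge must end in an unmasked grid node
    target_nodes = m2g_edge_index[1].unique()
    assert int(target_nodes.max()) < n_grid_nodes
    assert bool(decode_mask[target_nodes].all())
```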
Good suggestions all around @SimonKamuk! Was there something specific introduced by this PR that you think should be added to the readme? I was thinking that the example in the readme can continue to use default parameters. Otherwise, this is the plan for this PR now:
EDIT: Given later changes we might want to take another look over this before merging.
Well, it's not strictly necessary, and I think it's actually better suited for the documentation notebooks rather than the readme, but I just thought it would be nice to describe this feature with an example to make it easier for people to find. But again, it might not be necessary 😄
I do agree though that it could be nice to have a documentation notebook with an example of using this. Will see if I can put together something small.
Co-authored-by: Leif Denby <leif@denby.eu>
…args in archetypes
There is one small problem currently, which is one reason why a6b6137 was introduced. When using the decoding mask (maybe this could happen also without it, not sure?), not all mesh nodes might be connected to the grid in m2g, i.e. there are mesh nodes that are not endpoints of any edges in m2g. This is desirable. However, when you split up a graph using the edge-attribute-based sub-graphing, only nodes that are endpoints of some edge will be included; node subsetting is only implicit through the edge sub-graphing (weather-model-graphs/src/weather_model_graphs/networkx_utils.py, lines 80 to 88 in a6d43e3). This means that something like the check in weather-model-graphs/tests/test_save.py, line 30 in a6d43e3, is affected.
This is why the option to return separate components was introduced, since it is unnecessary to merge everything and then split it up again (especially as splitting now breaks the node indices). Do you have any thoughts about a good way to fix this @leifdenby? Or do we want to fix it? Not sure if it has to be fixed in this PR, since in a sense it's a more general issue with the edge-attribute-based sub-graphing.
Now all changes from #32 are merged into main and here, so this diff is readable.
Ok, that sounds like an issue. I don't fully understand why yet, but I believe you. So rather than merging to create a graph that represents the whole encode-process-decode graph of nodes and edges, you introduced the option to return the separate components directly?
The reason why I implemented the graph creation by including a merge into one big graph is so that each node could be given a globally (as in across the whole graph) unique ID. I then assumed that this ID could be used for constructing the adjacency matrices for the different components (g2m, m2m, m2g say), which are then saved to individual files. Are you saying that this idea of having a global index doesn't work when masking out which grid nodes are used in m2g? I might have to create a little toy example to understand the issue properly.
Maybe if you have time you could add a test/notebook @joeloskarsson with an assert that checks for the indexing that you expect? I assume it is the grid-node indexing in m2g that matters here. I tried creating a notebook just now, but I am not quite sure about what the correct answer would be...
Yes, the issue does appear when you split the combined graph. But the problem is not that the indices are wrong, it is that you lose some nodes (which makes the indices wrong on the remaining nodes). So your toy example could be:
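A minimal sketch of such a toy example (my own illustration, assuming networkx-style graphs and an illustrative "component" edge attribute, not the exact code from this thread):

```python
# Illustrates how edge-based sub-graphing drops nodes that have no edges
# in the selected subset of edges.
import networkx as nx

g = nx.DiGraph()
# three "mesh" nodes and two "grid" nodes
g.add_nodes_from(["m0", "m1", "m2", "g0", "g1"])
# m2g edges: only m0 and m1 decode to grid nodes; m2 has no m2g edge
g.add_edge("m0", "g0", component="m2g")
g.add_edge("m1", "g1", component="m2g")

# select the m2g component by edge attribute, as the edge-attribute-based splitting does
m2g_edges = [(u, v) for u, v, d in g.edges(data=True) if d["component"] == "m2g"]
m2g = g.edge_subgraph(m2g_edges)

print(sorted(m2g.nodes()))  # ['g0', 'g1', 'm0', 'm1'] -- 'm2' is silently dropped
```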
I have dug a bit deeper @joeloskarsson and I have realised where the issue is. I was under the impression that the node indexing was preserved when converting the networkx graph to pyg, but that is not the case. What I still don't quite understand is what neural-lam assumes about the grid-index values for different parts of the whole encode-process-decode graph. I think (based on the fact that we're having this discussion) that my assumptions here don't match what neural-lam expects. I have added a notebook with my experimentation here: https://github.com/leifdenby/weather-model-graphs/blob/decoding_mask/docs/decoding_mask.ipynb, and started on a test for the decoding mask too: https://github.com/leifdenby/weather-model-graphs/blob/decoding_mask/tests/test_graph_decode_gridpoints_mask.py#L17
Yes, this is exactly what I also realized. I wish I would have had a bit more time to write out an explanation for this, and I could have saved you the work 😓 Well well, I hope you feel like you gained some insights on the way. In my view the arbitrariness of indexing when converting networkx to pyg feels really bad and not very thought through. But I guess this is all just because networkx doesn't really have a notion of an integer node index (closer to how we think of a graph in theory, with sets; bad for practical work imo 😛).
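To make the point about conversion indexing concrete, a small sketch (my own illustration, not from the thread) of how torch_geometric.utils.from_networkx assigns integer indices purely by node iteration order, so sub-graphing shifts the indices of the surviving nodes:

```python
import networkx as nx
from torch_geometric.utils import from_networkx

g = nx.DiGraph()
g.add_nodes_from([10, 20, 30])  # networkx node labels carry no integer meaning for pyg
g.add_edge(10, 30)
g.add_edge(20, 30)

pyg = from_networkx(g)
print(pyg.edge_index)  # tensor([[0, 1], [2, 2]]): labels 10/20/30 become indices 0/1/2

# sub-graphing by edges drops node 20, and node 30 silently becomes index 1
sub = g.edge_subgraph([(10, 30)])
print(from_networkx(sub).edge_index)  # tensor([[0], [1]])
```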
I am not sure if that is actually necessary, but we could do that. My implementation using this in neural-lam makes the hard decision that the nodes that you want to decode to have the first indices, followed by the nodes only used in g2m. This does mean that it doesn't matter if you include the masked out node indices in your m2g graph (you'll get the same edge_index tensor in the end anyway). On a more conceptual level, one could either argue that m2g should only index into the set of nodes we decode to, or argue that m2g should index into the set of all grid nodes, but only contain edges to the unmasked ones. Not obvious to me what makes more sense.
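A small sketch (illustrative only; variable names are assumptions, not neural-lam code) of the reordering described here, where the grid nodes we decode to get the first indices and the remaining, g2m-only nodes come after:

```python
import numpy as np

# decode_mask: True for grid nodes we decode to, False for boundary-only nodes
decode_mask = np.array([True, False, True, True, False])

# permutation placing decoded nodes first, boundary-only nodes after
perm = np.concatenate([np.flatnonzero(decode_mask), np.flatnonzero(~decode_mask)])
# old grid index -> new grid index
new_index = np.empty_like(perm)
new_index[perm] = np.arange(len(perm))

print(perm)       # [0 2 3 1 4]
print(new_index)  # [0 3 1 2 4]
# with this ordering, m2g edge targets re-mapped through new_index all fall in
# the range [0, decode_mask.sum()), regardless of whether masked-out grid node
# indices are present in the saved m2g graph or not
```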
Yea, there is quite a lot to this. I have made very minimal assumptions wrt node indexing in neural-lam. Given that I know that we have some different perspectives on the graph (I have this perspective of separate graphs, rather than one joined), I feel like it is something that should be explained better. Maybe I could try to write up some explanation for what neural-lam expects (that is, in mllam/neural-lam#93) when I find some time, or we could sit down and talk it through.
Describe your changes
For LAM forecasting it is reasonable to not always predict values in nodes that are considered part of the boundary forcing. This is in particular a change planned for neural-lam. When we consider the graphs involved, this means that the m2g edges should only connect to a subset of the grid nodes.
This PR introduces an option `decode_mask` that allows specifying an Iterable of booleans (e.g. a numpy array) indicating which of the grid nodes should be included in the decoding part of the graph (m2g). In the LAM case this allows specifying such a mask with True for the inner-region nodes.

This builds on #32, which should be merged first. Here is a diff for only this PR in the meantime: joeloskarsson/weather-model-graphs@general_coordinates...decoding_mask
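A hypothetical usage sketch: the function and keyword names below (create_keisler_graph, coords, decode_mask) are assumptions based on this PR's description and #32, and may differ from the actual weather-model-graphs API.

```python
import numpy as np
import weather_model_graphs as wmg

# grid-point coordinates, shape (N_grid, 2)
x, y = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
coords = np.stack([x.ravel(), y.ravel()], axis=-1)

# True for interior nodes we want to decode to, False for boundary-forcing nodes
interior = np.all((coords > 0.2) & (coords < 0.8), axis=1)

graph = wmg.create.archetype.create_keisler_graph(
    coords=coords,
    decode_mask=interior,  # only these grid nodes receive m2g edges
)
```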
Type of change
Checklist before requesting a review
Branch is up to date with the target branch (if not, update using pull with the --rebase option if possible).
Checklist for reviewers
Each PR comes with its own improvements and flaws. The reviewer should check the following:
Author checklist after completed review
Changelog updated, reflecting the type of change (add section where missing):
Checklist for assignee