
Investigate issues associated with .zarr format of new Parcels releases #384

Open
hrsdawson opened this issue Jun 14, 2024 · 7 comments

@hrsdawson
Collaborator

Newer versions of Parcels output trajectory data in .zarr format rather than NetCDF. In some (all?) cases, this can create a very large number of files, clogging NCI projects on Gadi.

To do:

  • Check the output format of new Parcels releases
  • If there is no flexibility in the output format (I don't think there is):
    • Add warning to the Particle tracking recipe about the generation of many files
    • Add an example to the same recipe of how to consolidate Parcels output into fewer files (e.g. converting to NetCDF); see the sketch after this list
    • Add some guidelines on workflow, e.g. 1) running simulations on scratch, 2) consolidating output, 3) moving consolidated file(s) to gdata, 4) deleting raw .zarr output.
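As a starting point, here is a minimal sketch of the consolidation step. The paths and file names are hypothetical, and it assumes xarray (with the netCDF4 engine) is available in the analysis environment:

import os
import xarray as xr

# Hypothetical paths for illustration only; point these at your own run directory.
scratch_dir = f"/scratch/{os.environ['PROJECT']}/{os.environ['USER']}/particle_tracking"
zarr_path = os.path.join(scratch_dir, "output.zarr")  # raw Parcels .zarr output
nc_path = os.path.join(scratch_dir, "output.nc")      # single consolidated file to copy to gdata

# Open the zarr store lazily, then write everything out as one compressed NetCDF file
ds = xr.open_zarr(zarr_path)
encoding = {var: {"zlib": True, "complevel": 5} for var in ds.data_vars}
ds.to_netcdf(nc_path, encoding=encoding)

Once the NetCDF file has been checked, the raw .zarr directory on scratch can be deleted, following the workflow above.
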
@hrsdawson hrsdawson added this to Unclaimed in COSIMA Hackathon 1.0 via automation Jun 14, 2024
@hrsdawson hrsdawson removed this from Unclaimed in COSIMA Hackathon 1.0 Jun 14, 2024
@anton-seaice
Collaborator

The recipe at the moment puts the output in scratch:

dir = ! echo /scratch/$PROJECT/$USER/particle_tracking

and the Parcels docs recommend zarr (https://docs.oceanparcels.org/en/latest/examples/tutorial_output.html#Reading-the-output-file), so maybe we can just add a note about this (i.e. why we are using scratch, and a warning about the large number of files).

@adele-morrison
Collaborator

I think what Hannah was getting at is that we need to provide an example of how to postprocess the zarr files to reduce the file numbers before they are transferred to gdata. We recently had an example of a relatively small particle tracking project (only thousands of particles) that resulted in >4 million files stored on gdata. In that case, each particle had a separate file for each of lon, lat, depth, time etc. at EVERY time/position!

I’m not sure if that’s the default for Parcels now, because our notebook example uses an old parcels version that saved in netcdf. So as Hannah says above, the first step is to check what the new default does in terms of number of files, and what options there are for reducing file numbers if the default is bad.

@anton-seaice
Collaborator

Oh I see! That's definitely problematic.

I’m not sure if that’s the default for Parcels now, because our notebook example uses an old parcels version that saved in netcdf.

Our example is up to date: we moved to Parcels 3 at the end of last year, when 'conda-analysis' moved over to Parcels 3, so the example is already using zarr.

@hrsdawson
Collaborator Author

hrsdawson commented Jun 16, 2024

Okay, well in that case maybe we just need to:

  1. Do what @anton-seaice suggested and add a more explicit warning, maybe in/before cell 4(?). Although the recipe uses scratch, it doesn't explain why, and it even says "change to any directory you would like" - oops, probably my bad from when this example was first created.
  2. Provide a link to the Parcels example for consolidating files and provide a step-through example in the recipe of how to do this, before moving trajectory data to gdata?

@hrsdawson
Collaborator Author

@anton-seaice is there anything else you think would be worth updating too?

@anton-seaice
Collaborator

Sounds good - it's also possible that changing the

outputdt – Interval which dictates the update frequency of file output

argument in the ParticleFile instance (https://docs.oceanparcels.org/en/latest/reference/particlefile.html#module-parcels.particlefile) would reduce the number of files produced, but that would need some experimentation.
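For what it's worth, a rough sketch of what that could look like. It assumes a ParticleSet pset has already been built as in the recipe; the output name and time intervals are made up, and whether a coarser outputdt actually reduces the file count is exactly the experimentation needed:

from datetime import timedelta
import parcels

# Rough sketch: assumes `pset` (a ParticleSet) already exists as in the recipe notebook.
output_file = pset.ParticleFile(
    name="particle_tracking_output.zarr",  # hypothetical output name
    outputdt=timedelta(days=1),            # write particle positions daily rather than every dt
)
pset.execute(
    parcels.AdvectionRK4,
    runtime=timedelta(days=30),
    dt=timedelta(minutes=30),
    output_file=output_file,
)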

2. Provide a link to the Parcels example for consolidating files and provide a step-through example in the recipe of how to do this, before moving trajectory data to gdata?

This article makes a good point - storing tracks (i.e. lines) is more efficient in a vector format (e.g. GeoJSON, KML, etc.) than in a raster format (i.e. NetCDF). I don't know how much we want to mess with that, but whatever we do, compressing the output will most likely save a lot of space.
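If we did want to experiment with a vector format, something along these lines could be a starting point. Purely illustrative: it assumes the standard Parcels output layout (trajectory/obs dimensions with lon/lat variables) and uses geopandas/shapely, which may or may not be in the analysis environment:

import numpy as np
import xarray as xr
import geopandas as gpd
from shapely.geometry import LineString

ds = xr.open_zarr("output.zarr")  # hypothetical path to the Parcels output

# Build one LineString per particle trajectory, skipping NaN positions
records = []
for i in range(ds.sizes["trajectory"]):
    lon = ds.lon.isel(trajectory=i).values
    lat = ds.lat.isel(trajectory=i).values
    mask = ~(np.isnan(lon) | np.isnan(lat))
    if mask.sum() >= 2:
        records.append({"trajectory": i, "geometry": LineString(list(zip(lon[mask], lat[mask])))})

gdf = gpd.GeoDataFrame(records, geometry="geometry", crs="EPSG:4326")
gdf.to_file("tracks.geojson", driver="GeoJSON")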

@Thomas-Moore-Creative
Collaborator

Thomas-Moore-Creative commented Jun 16, 2024

Aloha, better understanding how we can/could use zarr on Gadi is indeed an important issue right now.

I need to tackle it here: Thomas-Moore-Creative/Climatology-generator-demo#12 and intend to employ Zarr ZipStore. There are apparently some limitations and important details but I can't speak to them fully until I try it myself.
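For anyone else interested, a tiny sketch of the ZipStore idea (hypothetical paths; it assumes zarr-python v2, where zarr.ZipStore is available, and packs the whole store into a single zip file so only one file lands on gdata):

import xarray as xr
import zarr

# Small enough in this sketch to load into memory before re-writing
ds = xr.open_zarr("output.zarr").load()  # hypothetical path to an existing zarr store

# Re-write the dataset into a single zip file; closing the ZipStore flushes the archive.
# Note: zip members apparently can't be overwritten, so this only suits a fresh write.
with zarr.ZipStore("output.zarr.zip", mode="w") as store:
    ds.to_zarr(store)

# Read it back later (ZipStore works fine for read-only access)
ds2 = xr.open_zarr(zarr.ZipStore("output.zarr.zip", mode="r"))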

A few further throwaway comments for consideration:

  • zarr as a data model for future netCDF offerings is on the cards with NCZarr
  • elsewhere (the Pangeo community) there is an increasing focus on getting all "big data" earth science onto "cloud"/object-store workflows > for example Arraylake: A Cloud-Native Data Lake Platform for Earth System Science
  • Pawsey offers significant object store capabilities linked to compute - will NCI???
  • My personal experience with zarr on Gadi /scratch is that it offers good compression and high performance, ideal for turning original model-output netCDF, which very often is not ready for use out of the box, into "analysis ready data" (ARD).
