Commit 72d6a64

fix typos that had been fixed in earlier commit, did they get lost in merge issue?

e-marshall committed Feb 4, 2024
1 parent 98bfc94 commit 72d6a64
Showing 5 changed files with 14 additions and 14 deletions.
10 changes: 5 additions & 5 deletions PC_RTC.ipynb
@@ -7036,7 +7036,7 @@
 "id": "4cd26316-b0cf-4ae8-a839-1cf619714e3f",
 "metadata": {},
 "source": [
-"In the cell below we specify the time range we're intereste in as well as the geographic area of interest."
+"In the cell below we specify the time range we're interested in as well as the geographic area of interest."
]
},
{
@@ -7142,7 +7142,7 @@
 "id": "450d2505-8797-456a-bd8a-97d7357ab785",
 "metadata": {},
 "source": [
-"You can see that `items` is an instance of the class `ItemCollection`, and we can explore it via the embedded html interface (is that right?)"
+"You can see that `items` is an instance of the class `ItemCollection`, and we can explore it via the embedded html interface."
]
},
{
@@ -322169,7 +322169,7 @@
 "metadata": {},
 "source": [
 "We can also explore the object metadata outside of the table. Try typing `.assets`, `.links`, `.STAC_extensions` and `.properties` onto the term below. \n",
-"You can query the object programmatically for the same metadata stored in the table using dictionary syntax on the `properties` accessor (<-- is that the write word here?)."
+"You can query the object programmatically for the same metadata stored in the table using dictionary syntax on the `properties` accessor."
]
},
{
@@ -325450,7 +325450,7 @@
 "source": [
 "### Retrieve source granule ID\n",
 "\n",
-"It will be useful to have the granule ID for the original SAR acquisition and GRD file used to generate the RTC image. The following code demonstrates retrieving the source granule ID from the STAC metadata and adding it as a variable (coorindate?) to the xarray object containing the RTC imagery."
+"It will be useful to have the granule ID for the original SAR acquisition and GRD file used to generate the RTC image. The following code demonstrates retrieving the source granule ID from the STAC metadata and adding it as a coordinate variable to the xarray object containing the RTC imagery."
]
},
{
@@ -328163,7 +328163,7 @@
 "id": "da726ad2-2949-46e5-be7f-cc2ff9061280",
 "metadata": {},
 "source": [
-"You can see that there are 69 time steps from the Ascending orbital pass and that all of the same dimensions and coordinates still exist, so you can subset for just the `VV` data from the ascending passes, or other variables you may be itnerested in.\n",
+"You can see that there are 69 time steps from the Ascending orbital pass and that all of the same dimensions and coordinates still exist, so you can subset for just the `VV` data from the ascending passes, or other variables you may be interested in.\n",
"\n",
"Let's take a look at the two polarizations side-by-side. Below, we'll plot the `VV` and `VH` polarizations from the same date next to each other:"
]
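The hunks above edit prose about querying STAC item metadata through the `properties` accessor. A minimal, self-contained sketch of that dictionary-style access, using a hand-made stand-in payload rather than real query results (field names follow common STAC extensions; on a real `pystac.Item` the same lookups work on `item.properties`):

```python
# Hand-made stand-in for a STAC item's metadata payload (an assumption,
# not output from a real Planetary Computer search).
item_properties = {
    "datetime": "2021-06-07T12:00:00Z",
    "sat:orbit_state": "ascending",
    "sar:polarizations": ["VV", "VH"],
}

# Dictionary syntax retrieves the same metadata shown in the HTML table.
orbit_state = item_properties["sat:orbit_state"]
polarizations = item_properties["sar:polarizations"]
print(orbit_state, polarizations)
```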
2 changes: 1 addition & 1 deletion appendix.md
@@ -12,7 +12,7 @@ However, for this dataset, I found that the `xr.open_mfdataset()` function wasn't
The stack I used contains multiple scenes that cover the same area of interest (multiple viewing geometries). If you wanted to select only scenes from a single viewing geometry at the expense of a denser time series, `xr.open_mfdataset()` might work a bit better (I didn't try this so cannot say for sure)
```

-Ultimately, I decided to use the approach of creating GDAL VRT objects, and reading those in with `rioxarray.open_rasterio()` to organize the data as xarray objects. This worked much better from a memory perspective but created much more work with organizing metadata and structuring the dataset in an analysis-ready format. The `xr.open_mfdataset()` function seems like a much more efficient approach if your dataset is well-aligned with its parameters (ie. a spatially uniform stack). While it did not end up being the best tool for this task, I decided to include the notebook with the `xr.open_mfdataset()` approach anyway in case it is useful to see a demonstration of this function. I learned a lot about how to structure a `preprocess` function and many other steps working on this example.
+Ultimately, I decided to use the approach of creating GDAL VRT objects, and reading those in with `rioxarray.open_rasterio()` to organize the data as xarray objects. This worked much better from a memory perspective but created much more work with organizing metadata and structuring the dataset in an analysis-ready format. The `xr.open_mfdataset()` function seems like a much more efficient approach if your dataset is well-aligned with its parameters (ie. a spatially uniform stack). While it did not end up being the best tool for this task, I decided to include the notebook with the `xr.open_mfdataset()` approach anyway, in case it is useful to see a demonstration of this function. I learned a lot about how to structure a `preprocess` function and many other steps working on this example.

Take a look at the notebook using `xr.open_mfdataset()` to read in stacks of ASF-processed Sentinel-1 RTC imagery files [here](asf_local_mf.ipynb)

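The appendix text above mentions structuring a `preprocess` function for `xr.open_mfdataset()`. A small sketch of the kind of helper such a function typically needs: parsing the acquisition timestamp out of a granule file name so each file can be given a time coordinate before concatenation. The file-name pattern and path below are assumptions, not the project's actual granule naming:

```python
import re
from datetime import datetime

def acquisition_time_from_path(path):
    """Parse the acquisition timestamp embedded in a granule file name.

    The YYYYMMDDTHHMMSS pattern here is an assumption; adapt the regex
    to the naming convention of your own stack.
    """
    match = re.search(r"(\d{8}T\d{6})", path)
    return datetime.strptime(match.group(1), "%Y%m%dT%H%M%S")

# With xr.open_mfdataset, a preprocess callable could use this to attach a
# time coordinate to each file before concatenation, roughly:
#   preprocess=lambda ds: ds.expand_dims(
#       time=[acquisition_time_from_path(ds.encoding["source"])])
t = acquisition_time_from_path("S1A_IW_20210607T031537_DVP_RTC30.tif")
print(t)
```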
4 changes: 2 additions & 2 deletions asf_inspect.ipynb
@@ -1251,7 +1251,7 @@
 "id": "05454a5d-373f-4620-aba6-cd1fe2ff0820",
 "metadata": {},
 "source": [
-"It looks like there are areas affected by different types of distortion on different dates. For example, in the lower left quadrant, there is a region that is blue (5 - affected by layover) on 6/7/2010 but much of that area appears to be in radar shadow on 6/10. This pattern is present throughout much of the scene with portions of area that are affected by layover in one acquisition in shadow in the next acquisition. This is due to different viewing geometries on different orbital passes: one of the above scenes was likely collected during an ascending pass and one during a descending.\n",
+"It looks like there are areas affected by different types of distortion on different dates. For example, in the lower left quadrant, there is a region that is blue (5 - affected by layover) on 6/7/2021 but much of that area appears to be in radar shadow on 6/10/2021. This pattern is present throughout much of the scene with portions of area that are affected by layover in one acquisition in shadow in the next acquisition. This is due to different viewing geometries on different orbital passes: one of the above scenes was likely collected during an ascending pass and one during a descending.\n",
"\n",
"Thanks to all the setup work we did in the previous notebook, we can confirm that: "
]
@@ -2744,7 +2744,7 @@
 "id": "68f545c1-05c9-4f1e-b82f-5712e2a2f042",
 "metadata": {},
 "source": [
-"Interesting, it looks like the winter and spring composites are relatively similar to one another. There also appears to be a decrease in backscatter mvoing from spring to summer. "
+"Interesting, it looks like the winter and spring composites are relatively similar to one another. There also appears to be a decrease in backscatter moving from spring to summer. "
]
},
{
4 changes: 2 additions & 2 deletions asf_local_vrt.ipynb
@@ -165,7 +165,7 @@
 "source": [
 "### Read in vector data \n",
 "\n",
-"We will use this vector data object later but we will read it in as a `geopandas.GeoDataFrame` object now. It is called PC aoi because the GeoJSON file was created off of the spatial extent of the SAR dat we will access from Microsoft Planetary Computer (PC), in order to have comparable datasets.\n"
+"We will use this vector data object later but we will read it in as a `geopandas.GeoDataFrame` object now. It is called PC aoi because the GeoJSON file was created off of the spatial extent of the SAR data we will access from Microsoft Planetary Computer (PC), in order to have comparable datasets.\n"
]
},
{
@@ -2933,7 +2933,7 @@
 "id": "115fdb7b-df78-4e5e-8761-a987cfe14b2c",
 "metadata": {},
 "source": [
-"Clip the full object by the same AOI as above. Just as in the last notebook, we will use the [`rioxarray.clip()` method](https://corteva.github.io/rioxarray/stable/examples/clip_geom.html)."
+"Clip the full object by the same AOI as above. We will use the [`rioxarray.clip()` method](https://corteva.github.io/rioxarray/stable/examples/clip_geom.html)."
]
},
{
8 changes: 4 additions & 4 deletions dataset_comparison.ipynb
@@ -168,7 +168,7 @@
 "\n",
 "We can use the `storemagic` command `%store` to retrieve the variable we constructed and saved in a previous notebook, rather than having to create it again. Read more about this [here](https://levelup.gitconnected.com/how-to-store-variables-in-jupyter-notebook-fea8aa60a9b)\n",
 "\n",
-"This let's use call the ASF dataset (`vrt_new`) and PC dataset (`da_pc`)"
+"This let's us call the ASF dataset (`vrt_new`) and PC dataset (`da_pc`)"
]
},
{
@@ -2090,7 +2090,7 @@
 "source": [
 "## Extract common data take ID from granule IDs\n",
 "\n",
-"We want to ensure that we are performing a direct comparison of the ASF and PC datasets. To do this, we would like to use the acquisition ID that is stored in the source granule name (published by ESA). In the setup notebooks we attached the entire granule IDs of the SLC images to the ASF dataset and the GRD images to the PC dataset. In the ASF data inspection notebook, we attached data take id as a non-dimensional coordianet. Now we will do the same for the Planetary Computer dataset, extracting just the 6-digit acquisition ID from the granule ID and using this for a scene-by-scene comparison."
+"We want to ensure that we are performing a direct comparison of the ASF and PC datasets. To do this, we would like to use the acquisition ID that is stored in the source granule name (published by ESA). In the setup notebooks we attached the entire granule IDs of the SLC images to the ASF dataset and the GRD images to the PC dataset. In the ASF data inspection notebook, we attached data take id as a non-dimensional coordinate. Now we will do the same for the Planetary Computer dataset, extracting just the 6-digit acquisition ID from the granule ID and using this for a scene-by-scene comparison."
]
},
{
@@ -4026,7 +4026,7 @@
 "id": "2b49d742-a17a-4665-8387-a86a83bfef4d",
 "metadata": {},
 "source": [
-"Now we have data take ID coordinates for both datasets. We want to find the common data take IDs between the two datasets. TO do this, I extract a list of the acquisition IDs (`data_take_id`) for both datasets and then find the intersection of the two lists (the list object `common_data_takes`)"
+"Now we have data take ID coordinates for both datasets. We want to find the common data take IDs between the two datasets. To do this, I extract a list of the acquisition IDs (`data_take_id`) for both datasets and then find the intersection of the two lists (the list object `common_data_takes`)"
]
},
{
@@ -4067,7 +4067,7 @@
 "id": "4ca6f9fe-5ccb-4b8e-b3b9-c16012fcbcfc",
 "metadata": {},
 "source": [
-"It looks like there are 84 RTC images that are generated from common acquisitions between the two datasets"
+"It looks like there are 83 RTC images that are generated from common acquisitions between the two datasets"
]
},
{
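The last hunks edit prose about intersecting the two datasets' acquisition IDs. A hedged, pure-Python sketch of that step; the IDs below are made up, and in the notebooks the lists would come from each dataset's `data_take_id` coordinate (e.g. `list(ds.data_take_id.values)`):

```python
# Hypothetical data take IDs standing in for the coordinates on the ASF and
# Planetary Computer xarray objects (not real acquisition IDs).
asf_data_takes = ["04A3F1", "04A3F2", "04B001", "04B1C7"]
pc_data_takes = ["04A3F2", "04B001", "04C9AA"]

# Intersection of the two ID lists, preserving the ASF ordering.
common_data_takes = [dt for dt in asf_data_takes if dt in set(pc_data_takes)]
print(common_data_takes)
```

Either dataset can then be subset to `common_data_takes` with `.sel()` or a boolean mask for the scene-by-scene comparison.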
