
Commit

deploy: 72d6a64
e-marshall committed Feb 4, 2024
1 parent 20b91ef commit bef3a9b
Showing 11 changed files with 29 additions and 29 deletions.
10 changes: 5 additions & 5 deletions PC_RTC.html
@@ -7359,7 +7359,7 @@ <h2>STAC items<a class="headerlink" href="#stac-items" title="Permalink to this
</div>
</div>
</div>
<p>In the cell below we specify the time range we’re intereste in as well as the geographic area of interest.</p>
<p>In the cell below we specify the time range we’re interested in as well as the geographic area of interest.</p>
<div class="cell docutils container">
<div class="cell_input docutils container">
<div class="highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">time_range</span> <span class="o">=</span> <span class="s1">&#39;2021-01-01/2022-08-01&#39;</span>
@@ -7416,7 +7416,7 @@ <h2>STAC items<a class="headerlink" href="#stac-items" title="Permalink to this
</div>
</div>
</div>
<p>You can see that <code class="docutils literal notranslate"><span class="pre">items</span></code> is an instance of the class <code class="docutils literal notranslate"><span class="pre">ItemCollection</span></code>, and we can explore it via the embedded html interface (is that right?)</p>
<p>You can see that <code class="docutils literal notranslate"><span class="pre">items</span></code> is an instance of the class <code class="docutils literal notranslate"><span class="pre">ItemCollection</span></code>, and we can explore it via the embedded html interface.</p>
<div class="cell docutils container">
<div class="cell_input docutils container">
<div class="highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">items</span>
@@ -322253,7 +322253,7 @@ <h2>STAC items<a class="headerlink" href="#stac-items" title="Permalink to this
</div></div>
</div>
<p>We can also explore the object metadata outside of the table. Try typing <code class="docutils literal notranslate"><span class="pre">.assets</span></code>, <code class="docutils literal notranslate"><span class="pre">.links</span></code>, <code class="docutils literal notranslate"><span class="pre">.STAC_extensions</span></code> and <code class="docutils literal notranslate"><span class="pre">.properties</span></code> onto the term below.
You can query the object programmatically for the same metadata stored in the table using dictionary syntax on the <code class="docutils literal notranslate"><span class="pre">properties</span></code> accessor (&lt;– is that the write word here?).</p>
You can query the object programmatically for the same metadata stored in the table using dictionary syntax on the <code class="docutils literal notranslate"><span class="pre">properties</span></code> accessor.</p>
<div class="cell docutils container">
<div class="cell_input docutils container">
<div class="highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">items</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span>
@@ -325417,7 +325417,7 @@ <h2>Reading data using xarray<a class="headerlink" href="#reading-data-using-xar
</div>
<section id="retrieve-source-granule-id">
<h3>Retrieve source granule ID<a class="headerlink" href="#retrieve-source-granule-id" title="Permalink to this heading">#</a></h3>
<p>It will be useful to have the granule ID for the original SAR acquisition and GRD file used to generate the RTC image. The following code demonstrates retrieving the source granule ID from the STAC metadata and adding it as a variable (coorindate?) to the xarray object containing the RTC imagery.</p>
<p>It will be useful to have the granule ID for the original SAR acquisition and GRD file used to generate the RTC image. The following code demonstrates retrieving the source granule ID from the STAC metadata and adding it as a coordinate variable to the xarray object containing the RTC imagery.</p>
<div class="cell docutils container">
<div class="cell_input docutils container">
<div class="highlight-ipython3 notranslate"><div class="highlight"><pre><span></span><span class="n">stac_item</span> <span class="o">=</span> <span class="n">pystac</span><span class="o">.</span><span class="n">read_file</span><span class="p">(</span><span class="s1">&#39;https://planetarycomputer.microsoft.com/api/stac/v1/collections/sentinel-1-rtc/items/S1A_IW_GRDH_1SDV_20210602T120544_20210602T120609_038161_0480FD_rtc&#39;</span><span class="p">)</span>
@@ -327890,7 +327890,7 @@ <h3>Retrieve source granule ID<a class="headerlink" href="#retrieve-source-granu
| 0.00,-10.00, 3101660.00|
| 0.00, 0.00, 1.00|</dd><dt><span>resolution :</span></dt><dd>10.0</dd></dl></div></li></ul></div></div></div></div>
</div>
<p>You can see that there are 69 time steps from the Ascending orbital pass and that all of the same dimensions and coordinates still exist, so you can subset for just the <code class="docutils literal notranslate"><span class="pre">VV</span></code> data from the ascending passes, or other variables you may be itnerested in.</p>
<p>You can see that there are 69 time steps from the Ascending orbital pass and that all of the same dimensions and coordinates still exist, so you can subset for just the <code class="docutils literal notranslate"><span class="pre">VV</span></code> data from the ascending passes, or other variables you may be interested in.</p>
<p>Let’s take a look at the two polarizations side-by-side. Below, we’ll plot the <code class="docutils literal notranslate"><span class="pre">VV</span></code> and <code class="docutils literal notranslate"><span class="pre">VH</span></code> polarizations from the same date next to each other:</p>
<div class="cell docutils container">
<div class="cell_input docutils container">
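The PC_RTC edits above walk through searching the Planetary Computer STAC API for Sentinel-1 RTC items over a time range and area of interest, then reading item metadata through the `properties` dictionary. A minimal sketch of that workflow follows; it is not the notebook's exact code: the bounding box is a hypothetical placeholder, and `item_collection()` / `sign_inplace` assume recent `pystac-client` and `planetary-computer` releases.

```python
# Sketch only, not the notebook's exact code; the bbox is a hypothetical AOI.
import planetary_computer
import pystac_client

catalog = pystac_client.Client.open(
    "https://planetarycomputer.microsoft.com/api/stac/v1",
    modifier=planetary_computer.sign_inplace,  # signs asset URLs on access
)

time_range = "2021-01-01/2022-08-01"
bbox = [88.21, 27.92, 88.32, 28.03]  # hypothetical [xmin, ymin, xmax, ymax]

search = catalog.search(
    collections=["sentinel-1-rtc"],
    bbox=bbox,
    datetime=time_range,
)
items = search.item_collection()  # a pystac.ItemCollection

# Dictionary-style access to the same metadata shown in the embedded HTML repr
first = items[0]
print(first.id)
print(first.properties["sat:orbit_state"])  # property name from the sat STAC extension
```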
10 changes: 5 additions & 5 deletions _sources/PC_RTC.ipynb
@@ -7036,7 +7036,7 @@
"id": "4cd26316-b0cf-4ae8-a839-1cf619714e3f",
"metadata": {},
"source": [
"In the cell below we specify the time range we're intereste in as well as the geographic area of interest."
"In the cell below we specify the time range we're interested in as well as the geographic area of interest."
]
},
{
@@ -7142,7 +7142,7 @@
"id": "450d2505-8797-456a-bd8a-97d7357ab785",
"metadata": {},
"source": [
"You can see that `items` is an instance of the class `ItemCollection`, and we can explore it via the embedded html interface (is that right?)"
"You can see that `items` is an instance of the class `ItemCollection`, and we can explore it via the embedded html interface."
]
},
{
@@ -322169,7 +322169,7 @@
"metadata": {},
"source": [
"We can also explore the object metadata outside of the table. Try typing `.assets`, `.links`, `.STAC_extensions` and `.properties` onto the term below. \n",
"You can query the object programmatically for the same metadata stored in the table using dictionary syntax on the `properties` accessor (<-- is that the write word here?)."
"You can query the object programmatically for the same metadata stored in the table using dictionary syntax on the `properties` accessor."
]
},
{
@@ -325450,7 +325450,7 @@
"source": [
"### Retrieve source granule ID\n",
"\n",
"It will be useful to have the granule ID for the original SAR acquisition and GRD file used to generate the RTC image. The following code demonstrates retrieving the source granule ID from the STAC metadata and adding it as a variable (coorindate?) to the xarray object containing the RTC imagery."
"It will be useful to have the granule ID for the original SAR acquisition and GRD file used to generate the RTC image. The following code demonstrates retrieving the source granule ID from the STAC metadata and adding it as a coordinate variable to the xarray object containing the RTC imagery."
]
},
{
@@ -328163,7 +328163,7 @@
"id": "da726ad2-2949-46e5-be7f-cc2ff9061280",
"metadata": {},
"source": [
"You can see that there are 69 time steps from the Ascending orbital pass and that all of the same dimensions and coordinates still exist, so you can subset for just the `VV` data from the ascending passes, or other variables you may be itnerested in.\n",
"You can see that there are 69 time steps from the Ascending orbital pass and that all of the same dimensions and coordinates still exist, so you can subset for just the `VV` data from the ascending passes, or other variables you may be interested in.\n",
"\n",
"Let's take a look at the two polarizations side-by-side. Below, we'll plot the `VV` and `VH` polarizations from the same date next to each other:"
]
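One of the notebook cells edited above describes attaching the source granule ID as a non-dimensional coordinate along `time`. A sketch of that step using `assign_coords` is below; `da` and the ordering assumption are hypothetical, and the `_rtc` suffix handling mirrors the item ID pattern visible in the STAC URL rather than the notebook's actual parsing.

```python
# Sketch, not the notebook's code: assumes `da` is a (time, y, x) DataArray and
# `items` is the matching ItemCollection, ordered identically along time.
granule_ids = [item.id.removesuffix("_rtc") for item in items]  # e.g. S1A_IW_GRDH_1SDV_...

# Attach the granule IDs as a non-dimensional coordinate indexed by time
da = da.assign_coords(granule_id=("time", granule_ids))
```

Once attached, `granule_id` travels with any selection or grouping along `time`.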
2 changes: 1 addition & 1 deletion _sources/appendix.md
@@ -12,7 +12,7 @@ However, for this dataset, I found that the `xr.open_mfdataset()` function wasn't
The stack I used contains multiple scenes that cover the same area of interest (multiple viewing geometries). If you wanted to select only scenes from a single viewing geometry at the expense of a denser time series, `xr.open_mfdataset()` might work a bit better (I didn't try this so cannot say for sure)
```

Ultimately, I decided to use the approach of creating GDAL VRT objects, and reading those in with `rioxarray.open_rasterio()` to organize the data as xarray objects. This worked much better from a memory perspective but created much more work with organizing metadata and structuring the dataset in an analysis-ready format. The `xr.open_mfdataset()` function seems like a much more efficient approach if your dataset is well-aligned with its parameters (ie. a spatially uniform stack). While it did not end up being the best tool for this task, I decided to include the notebook with the `xr.open_mfdataset()` approach anyway in case it is useful to see a demonstration of this function. I learned a lot about how to structure a `preprocess` function and many other steps working on this example.
Ultimately, I decided to use the approach of creating GDAL VRT objects, and reading those in with `rioxarray.open_rasterio()` to organize the data as xarray objects. This worked much better from a memory perspective but created much more work with organizing metadata and structuring the dataset in an analysis-ready format. The `xr.open_mfdataset()` function seems like a much more efficient approach if your dataset is well-aligned with its parameters (ie. a spatially uniform stack). While it did not end up being the best tool for this task, I decided to include the notebook with the `xr.open_mfdataset()` approach anyway, in case it is useful to see a demonstration of this function. I learned a lot about how to structure a `preprocess` function and many other steps working on this example.

Take a look at the notebook using `xr.open_mfdataset()` to read in stacks of ASF-processed Sentinel-1 RTC imagery files [here](asf_local_mf.ipynb)

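For readers weighing the two approaches discussed in the appendix, here is a hypothetical sketch of the `xr.open_mfdataset()` plus `preprocess` pattern; the glob pattern and filename-based time parsing are placeholders, not the project's actual file layout.

```python
# Hypothetical sketch of the open_mfdataset + preprocess pattern; the glob and
# the filename token used for the timestamp are placeholders.
import glob
import os

import pandas as pd
import rioxarray  # noqa: F401  (registers the "rasterio" engine for xarray)
import xarray as xr


def preprocess(ds):
    """Add a length-1 time dimension parsed from the source file name."""
    fname = os.path.basename(ds.encoding["source"])
    acq_time = pd.to_datetime(fname.split("_")[2], format="%Y%m%dT%H%M%S")
    return ds.expand_dims(time=[acq_time])


files = sorted(glob.glob("rtc_stack/*_VV.tif"))  # placeholder path
ds = xr.open_mfdataset(
    files,
    engine="rasterio",
    preprocess=preprocess,
    combine="nested",
    concat_dim="time",
)
```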
4 changes: 2 additions & 2 deletions _sources/asf_inspect.ipynb
@@ -1251,7 +1251,7 @@
"id": "05454a5d-373f-4620-aba6-cd1fe2ff0820",
"metadata": {},
"source": [
"It looks like there are areas affected by different types of distortion on different dates. For example, in the lower left quadrant, there is a region that is blue (5 - affected by layover) on 6/7/2010 but much of that area appears to be in radar shadow on 6/10. This pattern is present throughout much of the scene with portions of area that are affected by layover in one acquisition in shadow in the next acquisition. This is due to different viewing geometries on different orbital passes: one of the above scenes was likely collected during an ascending pass and one during a descending.\n",
"It looks like there are areas affected by different types of distortion on different dates. For example, in the lower left quadrant, there is a region that is blue (5 - affected by layover) on 6/7/2021 but much of that area appears to be in radar shadow on 6/10/2021. This pattern is present throughout much of the scene with portions of area that are affected by layover in one acquisition in shadow in the next acquisition. This is due to different viewing geometries on different orbital passes: one of the above scenes was likely collected during an ascending pass and one during a descending.\n",
"\n",
"Thanks to all the setup work we did in the previous notebook, we can confirm that: "
]
@@ -2744,7 +2744,7 @@
"id": "68f545c1-05c9-4f1e-b82f-5712e2a2f042",
"metadata": {},
"source": [
"Interesting, it looks like the winter and spring composites are relatively similar to one another. There also appears to be a decrease in backscatter mvoing from spring to summer. "
"Interesting, it looks like the winter and spring composites are relatively similar to one another. There also appears to be a decrease in backscatter moving from spring to summer. "
]
},
{
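The seasonal backscatter composites compared in the asf_inspect diff can be built with xarray's datetime grouping. A minimal sketch, assuming `da` is a backscatter `DataArray` with a datetime `time` dimension and matplotlib is available (these are not the notebook's exact variable names):

```python
# Sketch: seasonal mean-backscatter composites from a (time, y, x) DataArray `da`.
seasonal = da.groupby("time.season").mean(dim="time")

# Compare, e.g., the winter (DJF) and spring (MAM) composites side by side
seasonal.sel(season=["DJF", "MAM"]).plot(col="season", cmap="gray")
```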
4 changes: 2 additions & 2 deletions _sources/asf_local_vrt.ipynb
@@ -165,7 +165,7 @@
"source": [
"### Read in vector data \n",
"\n",
"We will use this vector data object later but we will read it in as a `geopandas.GeoDataFrame` object now. It is called PC aoi because the GeoJSON file was created off of the spatial extent of the SAR dat we will access from Microsoft Planetary Computer (PC), in order to have comparable datasets.\n"
"We will use this vector data object later but we will read it in as a `geopandas.GeoDataFrame` object now. It is called PC aoi because the GeoJSON file was created off of the spatial extent of the SAR data we will access from Microsoft Planetary Computer (PC), in order to have comparable datasets.\n"
]
},
{
@@ -2933,7 +2933,7 @@
"id": "115fdb7b-df78-4e5e-8761-a987cfe14b2c",
"metadata": {},
"source": [
"Clip the full object by the same AOI as above. Just as in the last notebook, we will use the [`rioxarray.clip()` method](https://corteva.github.io/rioxarray/stable/examples/clip_geom.html)."
"Clip the full object by the same AOI as above. We will use the [`rioxarray.clip()` method](https://corteva.github.io/rioxarray/stable/examples/clip_geom.html)."
]
},
{
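The asf_local_vrt edits above mention reading the AOI as a `geopandas.GeoDataFrame` and clipping with the `rioxarray.clip()` method. A minimal sketch of those two steps, with a placeholder GeoJSON path and assuming `da` already carries CRS information via rioxarray:

```python
# Sketch of the read-and-clip step; the GeoJSON path is a placeholder.
import geopandas as gpd
import rioxarray  # noqa: F401  (registers the .rio accessor on xarray objects)

pc_aoi = gpd.read_file("pc_aoi.geojson")  # placeholder file name

# Clip the raster stack to the AOI polygon(s), dropping pixels outside it
da_clipped = da.rio.clip(pc_aoi.geometry.values, crs=pc_aoi.crs, drop=True)
```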
8 changes: 4 additions & 4 deletions _sources/dataset_comparison.ipynb
@@ -168,7 +168,7 @@
"\n",
"We can use the `storemagic` command `%store` to retrieve the variable we constructed and saved in a previous notebook, rather than having to create it again. Read more about this [here](https://levelup.gitconnected.com/how-to-store-variables-in-jupyter-notebook-fea8aa60a9b)\n",
"\n",
"This let's use call the ASF dataset (`vrt_new`) and PC dataset (`da_pc`)"
"This let's us call the ASF dataset (`vrt_new`) and PC dataset (`da_pc`)"
]
},
{
@@ -2090,7 +2090,7 @@
"source": [
"## Extract common data take ID from granule IDs\n",
"\n",
"We want to ensure that we are performing a direct comparison of the ASF and PC datasets. To do this, we would like to use the acquisition ID that is stored in the source granule name (published by ESA). In the setup notebooks we attached the entire granule IDs of the SLC images to the ASF dataset and the GRD images to the PC dataset. In the ASF data inspection notebook, we attached data take id as a non-dimensional coordianet. Now we will do the same for the Planetary Computer dataset, extracting just the 6-digit acquisition ID from the granule ID and using this for a scene-by-scene comparison."
"We want to ensure that we are performing a direct comparison of the ASF and PC datasets. To do this, we would like to use the acquisition ID that is stored in the source granule name (published by ESA). In the setup notebooks we attached the entire granule IDs of the SLC images to the ASF dataset and the GRD images to the PC dataset. In the ASF data inspection notebook, we attached data take id as a non-dimensional coordinate. Now we will do the same for the Planetary Computer dataset, extracting just the 6-digit acquisition ID from the granule ID and using this for a scene-by-scene comparison."
]
},
{
@@ -4026,7 +4026,7 @@
"id": "2b49d742-a17a-4665-8387-a86a83bfef4d",
"metadata": {},
"source": [
"Now we have data take ID coordinates for both datasets. We want to find the common data take IDs between the two datasets. TO do this, I extract a list of the acquisition IDs (`data_take_id`) for both datasets and then find the intersection of the two lists (the list object `common_data_takes`)"
"Now we have data take ID coordinates for both datasets. We want to find the common data take IDs between the two datasets. To do this, I extract a list of the acquisition IDs (`data_take_id`) for both datasets and then find the intersection of the two lists (the list object `common_data_takes`)"
]
},
{
@@ -4067,7 +4067,7 @@
"id": "4ca6f9fe-5ccb-4b8e-b3b9-c16012fcbcfc",
"metadata": {},
"source": [
"It looks like there are 84 RTC images that are generated from common acquisitions between the two datasets"
"It looks like there are 83 RTC images that are generated from common acquisitions between the two datasets"
]
},
{
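The dataset_comparison edits above describe intersecting the `data_take_id` coordinates of the ASF (`vrt_new`) and PC (`da_pc`) datasets to find common acquisitions. A sketch of that step, assuming both coordinates are 1-D along `time`; the notebook's exact subsetting may differ.

```python
# Sketch: intersect the data-take IDs of the two datasets and subset each to them.
asf_ids = set(vrt_new.data_take_id.values.tolist())
pc_ids = set(da_pc.data_take_id.values.tolist())
common_data_takes = sorted(asf_ids & pc_ids)
print(f"{len(common_data_takes)} RTC images come from common acquisitions")

# Keep only the time steps whose data take ID is shared by both datasets
asf_common = vrt_new.isel(time=vrt_new.data_take_id.isin(common_data_takes).values)
pc_common = da_pc.isel(time=da_pc.data_take_id.isin(common_data_takes).values)
```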
2 changes: 1 addition & 1 deletion appendix.html
@@ -382,7 +382,7 @@ <h3><code class="docutils literal notranslate"><span class="pre">xr.open_mfdatas
<p class="admonition-title">Note</p>
<p>The stack I used contains multiple scenes that cover the same area of interest (multiple viewing geometries). If you wanted to select only scenes from a single viewing geometry at the expense of a denser time series, <code class="docutils literal notranslate"><span class="pre">xr.open_mfdataset()</span></code> might work a bit better (I didn’t try this so cannot say for sure)</p>
</div>
<p>Ultimately, I decided to use the approach of creating GDAL VRT objects, and reading those in with <code class="docutils literal notranslate"><span class="pre">rioxarray.open_rasterio()</span></code> to organize the data as xarray objects. This worked much better from a memory perspective but created much more work with organizing metadata and structuring the dataset in an analysis-ready format. The <code class="docutils literal notranslate"><span class="pre">xr.open_mfdataset()</span></code> function seems like a much more efficient approach if your dataset is well-aligned with its parameters (ie. a spatially uniform stack). While it did not end up being the best tool for this task, I decided to include the notebook with the <code class="docutils literal notranslate"><span class="pre">xr.open_mfdataset()</span></code> approach anyway in case it is useful to see a demonstration of this function. I learned a lot about how to structure a <code class="docutils literal notranslate"><span class="pre">preprocess</span></code> function and many other steps working on this example.</p>
<p>Ultimately, I decided to use the approach of creating GDAL VRT objects, and reading those in with <code class="docutils literal notranslate"><span class="pre">rioxarray.open_rasterio()</span></code> to organize the data as xarray objects. This worked much better from a memory perspective but created much more work with organizing metadata and structuring the dataset in an analysis-ready format. The <code class="docutils literal notranslate"><span class="pre">xr.open_mfdataset()</span></code> function seems like a much more efficient approach if your dataset is well-aligned with its parameters (ie. a spatially uniform stack). While it did not end up being the best tool for this task, I decided to include the notebook with the <code class="docutils literal notranslate"><span class="pre">xr.open_mfdataset()</span></code> approach anyway, in case it is useful to see a demonstration of this function. I learned a lot about how to structure a <code class="docutils literal notranslate"><span class="pre">preprocess</span></code> function and many other steps working on this example.</p>
<p>Take a look at the notebook using <code class="docutils literal notranslate"><span class="pre">xr.open_mfdataset()</span></code> to read in stacks of ASF-processed Sentinel-1 RTC imagery files <a class="reference internal" href="asf_local_mf.html"><span class="doc std std-doc">here</span></a></p>
<p>In addition to the documentation linked above, some other useful resources for <code class="docutils literal notranslate"><span class="pre">xr.open_mfdataset()</span></code> that I found are:</p>
<ul class="simple">