From 20f9c5db99812bdb011908539b104d325d203f23 Mon Sep 17 00:00:00 2001
From: Rob Baker
Date: Thu, 18 Apr 2024 16:16:51 -0600
Subject: [PATCH 1/4] auto update via pkgdown and devtools

---
 docs/news/index.html                        | 172 --------------------
 docs/pkgdown.yml                            |   2 +-
 docs/reference/convert_datetime_format.html |   1 +
 docs/reference/fix_utc_offset.html          |   1 +
 docs/sitemap.xml                            | 114 -------------
 man/QCkit-package.Rd                        |   2 +-
 6 files changed, 4 insertions(+), 288 deletions(-)
 delete mode 100644 docs/news/index.html
 delete mode 100644 docs/sitemap.xml

diff --git a/docs/news/index.html b/docs/news/index.html
deleted file mode 100644
index 9bb61f5..0000000
--- a/docs/news/index.html
+++ /dev/null
@@ -1,172 +0,0 @@
-Changelog • QCkit
2024-04-17 * Major updates to the DRR template, including: using snake case instead of camel case for variables; updating Table 3 to display filenames only when there are multiple files; fixing multiple issues with footnotes; adding citations to NPSdataverse packages; and adding a section that prints the R code needed to download the data package and load it into R. * Updated the DRR documentation to account for the new variable names.


2024-03-07 * Updated the error warning in check_te() to no longer reference VPN, since NPS no longer uses a VPN. * Added the private function .get_unit_boundaries(), which hits an ArcGIS API to pull more precise park unit boundaries than get_park_polygon(). * Added validate_coord_list(), which takes advantage of the improved precision of .get_unit_boundaries() and is vectorized, enabling users to validate multiple coordinate and park-unit combinations directly from a data frame (see the sketch below).
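A minimal sketch of the vectorized workflow this entry describes. Only the function name and its vectorized, data-frame-friendly design come from the entry; the column names and the argument names in the commented call are illustrative assumptions, not taken from the QCkit documentation.

```r
# Hypothetical input: several coordinate/park-unit combinations in one data frame.
coords <- tibble::tibble(
  unit_code        = c("ROMO", "ROMO", "YELL"),    # assumed column of park unit codes
  decimalLatitude  = c(40.34, 40.40, 44.60),       # assumed latitude column
  decimalLongitude = c(-105.70, -105.58, -110.50)  # assumed longitude column
)

# Because validate_coord_list() is vectorized, it could be applied across the whole
# frame in one call, e.g. (argument names assumed, not from the QCkit docs):
# coords$coord_flag <- QCkit::validate_coord_list(coords$decimalLatitude,
#                                                 coords$decimalLongitude,
#                                                 coords$unit_code)
```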


2024-02-09 * This version adds the DRR template, example files, and associated documentation to the QCkit package. * Bug fix in get_custom_flags(): it was counting both A (Accepted) and AE (Accepted, Estimated) cells as Accepted. The regex was fixed so that Accepted includes all cells that start with A followed by nothing or by any character except E, which lets flags carry explanation codes (e.g., A_jenkins if "Jenkins" flagged the data as accepted). A sketch of this matching logic follows.
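A small, self-contained illustration of the matching behavior described above, in base R. The pattern below is one way to express "starts with A but is not AE"; it is not necessarily the exact regex used inside get_custom_flags().

```r
flags <- c("A", "A_jenkins", "AE", "AE_smith", "P", "R")

# Accepted: an "A" at the start of the cell that is NOT immediately followed by "E".
accepted <- grepl("^A(?!E)", flags, perl = TRUE)

setNames(accepted, flags)
#>         A A_jenkins        AE  AE_smith         P         R
#>      TRUE      TRUE     FALSE     FALSE     FALSE     FALSE
```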


2024-01-23 * Maintenance on get_custom_flags() to align with updated DRR requirements. * Added replace_blanks(), which ingests a directory of .csv files and writes them back out (overwriting the original files) with blanks converted to NA; if a file contains no data at all, it remains blank and needs to be dealt with manually. A conceptual sketch of this workflow follows.
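A conceptual, self-contained sketch (base R) of the workflow replace_blanks() automates. This is not the package's implementation; the directory path is a placeholder, and the real function may handle edge cases such as completely empty files differently.

```r
# Conceptual sketch only -- not the actual QCkit::replace_blanks() source.
csv_dir   <- "path/to/data_package"   # placeholder directory of .csv files
csv_files <- list.files(csv_dir, pattern = "\\.csv$", full.names = TRUE)

for (f in csv_files) {
  dat <- read.csv(f, colClasses = "character", check.names = FALSE)
  if (nrow(dat) == 0) next          # a file with no data is left as-is
  dat[dat == ""] <- NA              # convert blank cells to NA
  write.csv(dat, f, row.names = FALSE, na = "NA")
}
```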


2023-11-20 * Added the function create_datastore_script(), which, given a GitHub username and repository, generates a draft Script Reference on DataStore based on the information found in the latest release on GitHub (see the example below).
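An illustrative call based only on the description above. The argument names (and whether the function takes them positionally) are assumptions, not taken from the function's documentation; the owner and repository values are placeholders.

```r
# Illustrative only: the argument names below are assumed, not from the QCkit docs.
QCkit::create_datastore_script(
  owner = "nationalparkservice",  # assumed parameter for the GitHub user/organization
  repo  = "QCkit"                 # assumed parameter for the repository name
)
```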

24 April 2023 * Fixed a bug in get_custom_flags().


17 April 2023

  • get_elevation(): new function for getting elevation from GPS coordinates via the USGS API.

21 March 2023

  • order_cols(): new function for ordering columns.

16 March 2023


28 February 2023

  • te_check() bug fix: exact column-name filtering now allows for multiple columns with similar names in the input data. Improved the documentation for transparency.

23 February 2023

  • Updated te_check(): it now supports searching multiple park units.

22 February 2023

  • Updated te_check(): it now prints the source of the federal match list data and the date it was accessed to the console, and the output format is prettier. Added an "expansion" option to the function. It defaults to expansion = FALSE, which checks for exact matches between the scientific binomial supplied by the user and the full scientific binomial in the match list. When expansion = TRUE, the genera in the supplied data are checked against the match list and all species from a given genus are returned, regardless of whether each species is actually in the supplied data set. A new "InData" column tells the user whether a given species is actually in their data or was added by the expansion (see the sketch below).
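An illustrative call showing the documented expansion option. Only the expansion argument and the InData output column are named in this entry; the data-frame argument and the park-unit argument shown here are assumptions.

```r
# Illustrative only: apart from `expansion`, the argument names are assumed.
checked <- te_check(
  my_species_df,     # assumed: a data frame containing scientific binomials
  park = "BICY",     # assumed parameter name for the park unit(s) to search
  expansion = TRUE   # documented option: expand matches to whole genera
)
# With expansion = TRUE, the InData column indicates whether each returned species
# is actually present in the supplied data or was added by the genus-level expansion.
```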


02 February 2023

  • Fixed a major bug in te_check() that was causing the function to return species that were not threatened or endangered. The function now returns a tibble containing all species that are threatened, endangered, or considered for listing; specifies the status code of each species; and gives a brief explanation of the federal Endangered Species Act status code returned.

  • Deprecated get_dp_flags(), get_df_flags(), and get_dc_flags() in favor of get_custom_flags(). The new get_custom_flags() function returns 1-3 data frames, depending on user input, that contain the output of the three previous functions. It also allows the user to specify additional non-flagged columns to be included in the QC summary.
    • Marked get_custom_flags() as experimental.
    • Removed the "force" option and removed the final print statement.
    • Reduced the number of summary columns reported.
    • Fixed the RRU calculation to be (A+AE)/(A+AE+P+R+NA) instead of (A+E)/(A+AE+P+R) (see the worked example below).
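A small worked example of the corrected calculation. The flag counts are made up for illustration; the formula is the one stated in the bullet above.

```r
# Made-up flag counts for illustration only.
A       <- 120   # Accepted
AE      <- 15    # Accepted, Estimated
P       <- 8     # Provisional
R       <- 2     # Rejected
missing <- 5     # NA (missing) cells

# Corrected calculation: (A + AE) / (A + AE + P + R + NA)
(A + AE) / (A + AE + P + R + missing)
#> [1] 0.9

# The previous (incorrect) version divided (A + E) by (A + AE + P + R), omitting missing cells.
```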
  • Added three functions to summarize data quality flags:
    • get_dp_flags() returns counts of each data flag (A, AE, R, P) across the whole data package (as well as a count of all cells in the data package).
    • get_df_flags() returns counts of data flags within each data file of the data package (as well as counts for all cells within the data package).
    • get_dc_flags() returns the name of each flagging column within the data package, the count of each flag within each column, and the total number of cells across all the data flagging columns.
    • Each function has a force option that defaults to force = FALSE and prints the results to the screen. Setting force = TRUE suppresses the on-screen output.

  • Added a NEWS.md file to track changes to the package.

diff --git a/docs/pkgdown.yml b/docs/pkgdown.yml
index 686a438..c8df280 100644
--- a/docs/pkgdown.yml
+++ b/docs/pkgdown.yml
@@ -5,5 +5,5 @@ articles:
   DRR_Purpose_and_Scope: DRR_Purpose_and_Scope.html
   Starting-a-DRR: Starting-a-DRR.html
   Using-the-DRR-Template: Using-the-DRR-Template.html
-last_built: 2024-04-17T16:46Z
+last_built: 2024-04-17T16:57Z
diff --git a/docs/reference/convert_datetime_format.html b/docs/reference/convert_datetime_format.html
index ff5fa6b..fac83d0 100644
--- a/docs/reference/convert_datetime_format.html
+++ b/docs/reference/convert_datetime_format.html
@@ -96,6 +96,7 @@

Details

Examples

convert_datetime_format("MM/DD/YYYY")
+#> Warning: restarting interrupted promise evaluation
 #> Warning: internal error -3 in R_decompress1
 #> Error in eval(expr, envir, enclos): lazy-load database 'C:/Users/rlbaker/AppData/Local/R/win-library/4.3/QCkit/R/QCkit.rdb' is corrupt
 convert_datetime_format(c("MM/DD/YYYY", "YY-MM-DD"))
diff --git a/docs/reference/fix_utc_offset.html b/docs/reference/fix_utc_offset.html
index a0bcce1..b51a6ea 100644
--- a/docs/reference/fix_utc_offset.html
+++ b/docs/reference/fix_utc_offset.html
@@ -90,6 +90,7 @@ 

Examples

datetimes <- c("2023-11-16T03:32:49+07:00", "2023-11-16T03:32:49-07",
 "2023-11-16T03:32:49","2023-11-16T03:32:49Z")
 fix_utc_offset(datetimes)
+#> Warning: restarting interrupted promise evaluation
 #> Warning: internal error -3 in R_decompress1
 #> Error in eval(expr, envir, enclos): lazy-load database 'C:/Users/rlbaker/AppData/Local/R/win-library/4.3/QCkit/R/QCkit.rdb' is corrupt
 
diff --git a/docs/sitemap.xml b/docs/sitemap.xml
deleted file mode 100644
index 178dc77..0000000
--- a/docs/sitemap.xml
+++ /dev/null
@@ -1,114 +0,0 @@
-
-
-  
-    /404.html
-  
-  
-    /articles/DRR_Purpose_and_Scope.html
-  
-  
-    /articles/index.html
-  
-  
-    /articles/Starting-a-DRR.html
-  
-  
-    /articles/Using-the-DRR-Template.html
-  
-  
-    /authors.html
-  
-  
-    /index.html
-  
-  
-    /LICENSE-text.html
-  
-  
-    /LICENSE.html
-  
-  
-    /news/index.html
-  
-  
-    /reference/check_dc_cols.html
-  
-  
-    /reference/check_te.html
-  
-  
-    /reference/convert_datetime_format.html
-  
-  
-    /reference/convert_long_to_utm.html
-  
-  
-    /reference/convert_utm_to_ll.html
-  
-  
-    /reference/create_datastore_script.html
-  
-  
-    /reference/DC_col_check.html
-  
-  
-    /reference/dot-get_unit_boundary.html
-  
-  
-    /reference/fix_utc_offset.html
-  
-  
-    /reference/fuzz_location.html
-  
-  
-    /reference/get_custom_flags.html
-  
-  
-    /reference/get_dc_flags.html
-  
-  
-    /reference/get_df_flags.html
-  
-  
-    /reference/get_dp_flags.html
-  
-  
-    /reference/get_elevation.html
-  
-  
-    /reference/get_park_polygon.html
-  
-  
-    /reference/get_taxon_rank.html
-  
-  
-    /reference/get_utm_zone.html
-  
-  
-    /reference/index.html
-  
-  
-    /reference/long2UTM.html
-  
-  
-    /reference/order_cols.html
-  
-  
-    /reference/QCkit-package.html
-  
-  
-    /reference/replace_blanks.html
-  
-  
-    /reference/te_check.html
-  
-  
-    /reference/utm_to_ll.html
-  
-  
-    /reference/validate_coord.html
-  
-  
-    /reference/validate_coord_list.html
-  
-
diff --git a/man/QCkit-package.Rd b/man/QCkit-package.Rd
index 8d4991a..67d4e4b 100644
--- a/man/QCkit-package.Rd
+++ b/man/QCkit-package.Rd
@@ -28,7 +28,7 @@ Authors:
 
 Other contributors:
 \itemize{
-  \item Sarah Kelson [contributor]
+  \item Sarah Kelso (\href{https://orcid.org/0009-0002-8468-6945}{ORCID}) [contributor]
   \item Amy Sherman (\href{https://orcid.org/0000-0002-9276-0087}{ORCID}) [contributor]
 }
 

From 14de4a6788e8d78e2a451ec46ffb9cca0a2bde63 Mon Sep 17 00:00:00 2001
From: Rob Baker 
Date: Fri, 19 Apr 2024 09:30:30 -0600
Subject: [PATCH 2/4] Add sarah wright as author

---
 DESCRIPTION | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/DESCRIPTION b/DESCRIPTION
index b464bca..3322540 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -52,7 +52,10 @@ Imports:
     jsonlite,
     here,
     tibble,
-    tidyselect
+    tidyselect,
+    glue,
+    sp,
+    withr
 RoxygenNote: 7.3.1
 Suggests: 
     knitr,

From 650bf7db5fc39047e5c7e6bc5e2aa80c2fb5d4dc Mon Sep 17 00:00:00 2001
From: Rob Baker 
Date: Fri, 19 Apr 2024 09:31:52 -0600
Subject: [PATCH 3/4] autoupdate via pkgdown and devtools

---
 DESCRIPTION                                 |   4 +
 NAMESPACE                                   |   1 +
 R/geography.R                               | 141 ++++-
 R/utils.R                                   |   6 +-
 Untitled/Untitled.Rmd                       | 589 --------------------
 docs/404.html                               |   2 +-
 docs/LICENSE-text.html                      |   2 +-
 docs/LICENSE.html                           |   2 +-
 docs/articles/DRR_Purpose_and_Scope.html    |   2 +-
 docs/articles/Starting-a-DRR.html           |   2 +-
 docs/articles/Using-the-DRR-Template.html   |   2 +-
 docs/articles/index.html                    |   2 +-
 docs/authors.html                           |  10 +-
 docs/index.html                             |   3 +-
 docs/news/index.html                        | 172 ++++++
 docs/pkgdown.yml                            |   2 +-
 docs/reference/DC_col_check.html            |   2 +-
 docs/reference/QCkit-package.html           |   4 +-
 docs/reference/check_dc_cols.html           |   2 +-
 docs/reference/check_te.html                |   2 +-
 docs/reference/convert_datetime_format.html |  10 +-
 docs/reference/convert_long_to_utm.html     |   2 +-
 docs/reference/convert_utm_to_ll.html       |  18 +-
 docs/reference/create_datastore_script.html |   2 +-
 docs/reference/dot-get_unit_boundary.html   |   2 +-
 docs/reference/fix_utc_offset.html          |   8 +-
 docs/reference/fuzz_location.html           |   2 +-
 docs/reference/generate_ll_from_utm.html    | 200 +++++++
 docs/reference/get_custom_flags.html        |   2 +-
 docs/reference/get_dc_flags.html            |   2 +-
 docs/reference/get_df_flags.html            |   2 +-
 docs/reference/get_dp_flags.html            |   2 +-
 docs/reference/get_elevation.html           |   2 +-
 docs/reference/get_park_polygon.html        |   2 +-
 docs/reference/get_taxon_rank.html          |   2 +-
 docs/reference/get_utm_zone.html            |   2 +-
 docs/reference/index.html                   |   6 +-
 docs/reference/long2UTM.html                |   2 +-
 docs/reference/order_cols.html              |   2 +-
 docs/reference/replace_blanks.html          |   2 +-
 docs/reference/te_check.html                |   2 +-
 docs/reference/utm_to_ll.html               |   2 +-
 docs/reference/validate_coord.html          |   2 +-
 docs/reference/validate_coord_list.html     |   2 +-
 docs/sitemap.xml                            | 117 ++++
 man/convert_utm_to_ll.Rd                    |   6 +
 man/generate_ll_from_utm.Rd                 |  91 +++
 tests/testthat/test-geography.R             |  75 ++-
 48 files changed, 876 insertions(+), 645 deletions(-)
 delete mode 100644 Untitled/Untitled.Rmd
 create mode 100644 docs/news/index.html
 create mode 100644 docs/reference/generate_ll_from_utm.html
 create mode 100644 docs/sitemap.xml
 create mode 100644 man/generate_ll_from_utm.Rd

diff --git a/DESCRIPTION b/DESCRIPTION
index 3322540..93413c6 100644
--- a/DESCRIPTION
+++ b/DESCRIPTION
@@ -20,6 +20,10 @@ Authors@R: c(
            family = "Quevedo",
            role = "aut",
            comment = c(ORCID = "0000-0003-0129-981X")),
+  person(given = "sarah",
+           family = "Wright",
+           role = "aut",
+           comment = c(ORCID = "0009-0004-5060-2189")),
   person(given = "Sarah",
            family = "Kelso",
            role = "ctb",
diff --git a/NAMESPACE b/NAMESPACE
index ba89e22..61de488 100644
--- a/NAMESPACE
+++ b/NAMESPACE
@@ -9,6 +9,7 @@ export(convert_utm_to_ll)
 export(create_datastore_script)
 export(fix_utc_offset)
 export(fuzz_location)
+export(generate_ll_from_utm)
 export(get_custom_flags)
 export(get_dc_flags)
 export(get_df_flags)
diff --git a/R/geography.R b/R/geography.R
index f10f6dd..767670d 100644
--- a/R/geography.R
+++ b/R/geography.R
@@ -334,7 +334,142 @@ fuzz_location <- function(lat,
 
 #' Coordinate Conversion from UTM to Latitude and Longitude
 #'
-#' @description `convert_utm_to_ll()` takes your dataframe with UTM coordinates
+#' @description `generate_ll_from_utm()` takes your dataframe with UTM coordinates
+#' in separate Easting and Northing columns, and adds on an additional two
+#' columns with the converted decimalLatitude and decimalLongitude coordinates
+#' using the reference coordinate system NAD83. Your data must also contain columns
+#' specifying the zone and datum of your UTM coordinates.
+#' In contrast to `convert_utm_to_ll()` (superseded), `generate_ll_from_utm()` requires
+#' zone and datum columns. It supports quoted or unquoted column names and a user-specified datum for lat/long
+#' coordinates. It also adds an extra column to the output data table that documents the
+#' lat/long coordinate reference system.
+#'
+#' @details Define the name of your dataframe, the easting and northing columns
+#' within it, the UTM zone within which those coordinates are located, and the
+#' reference coordinate system (datum). UTM Northing and Easting columns must be
+#' in separate columns prior to running the function. If a datum for the lat/long output
+#' is not defined, the function will default to "NAD83". If there are missing coordinates in
+#' your dataframe, they will be preserved; however, they will be moved to the end
+#' of your dataframe. Note that some parameter names are not in snake_case but
+#' instead reflect DarwinCore naming conventions.
+#'
+#' @param df - The dataframe with UTM coordinates you would like to convert.
+#' Input the name of your dataframe.
+#' @param EastingCol - The name of your Easting UTM column. You may input the name
+#' with or without quotations, i.e., EastingCol and "EastingCol" are both valid.
+#' @param NorthingCol - The name of your Northing UTM column. You may input the name
+#' with or without quotations, i.e., NorthingCol and "NorthingCol" are both valid.
+#' @param ZoneCol - The column containing the UTM zone, with or without quotations.
+#' @param DatumCol - The column containing the datum for your UTM coordinates,
+#' with or without quotations.
+#' @param latlong_datum - The datum to use for lat/long coordinates. Defaults to NAD83.
+#'
+#' @return The function returns your dataframe, mutated with an additional two
+#' columns of decimalLongitude and decimalLatitude plus a column LatLong_CRS containing
+#' a PROJ string that specifies the coordinate reference system for these data.
+#' @export
+#'
+#' @examples
+#' \dontrun{
+#'
+#' my_dataframe %>%
+#' generate_ll_from_utm(
+#'   EastingCol = UTM_X,
+#'   NorthingCol = UTM_Y,
+#'   ZoneCol = Zone,
+#'   DatumCol = Datum
+#' )
+#'
+#' generate_ll_from_utm(
+#'   df = mydataframe,
+#'   EastingCol = "EastingCoords",
+#'   NorthingCol = "NorthingCoords",
+#'   ZoneCol = "zone",
+#'   DatumCol = "datum",
+#'   latlong_datum = "WGS84"
+#' )
+#' }
+generate_ll_from_utm <- function(df,
+                              EastingCol,
+                              NorthingCol,
+                              ZoneCol,
+                              DatumCol,
+                              latlong_datum = "NAD83") {
+
+  df <- dplyr::mutate(df, `_UTMJOINCOL` = seq_len(nrow(df))) %>%  # Add a temporary column for joining lat/long data back to orig. df. This is needed in case UTM data are missing and we need to remove those rows to do the conversion.
+    dplyr::ungroup()  # Ungroup df in case it comes in with unwanted groups.
+
+  # Separate df with just coordinates. We'll filter out any NA rows.
+  coord_df <- df %>%
+    dplyr::select(`_UTMJOINCOL`, {{EastingCol}}, {{NorthingCol}}, {{ZoneCol}}, {{DatumCol}})
+
+  withr::with_envvar(c("PROJ_LIB" = ""), {  # This is a fix for the proj library bug in R (see pinned post "sf::st_read() of geojson not getting CRS" in IMData General Discussion).
+    # filter out rows that are missing UTM, zone, or datum
+    coord_df <- coord_df %>%
+      dplyr::filter(!is.na({{EastingCol}}) &
+                      !is.na({{NorthingCol}}) &
+                      !is.na({{ZoneCol}}) &
+                      !is.na({{DatumCol}}))
+
+    na_row_count <- nrow(df) - nrow(coord_df)
+    if (na_row_count > 0) {
+      warning(paste(na_row_count, "rows are missing UTM coordinates, zone, and/or datum information."))
+    }
+
+    ## Set up CRS for lat/long data
+    latlong_CRS <- sp::CRS(glue::glue("+proj=longlat +datum={latlong_datum}"))  # CRS for our new lat/long values
+
+    # Loop through each datum and zone in the data
+    zones <- unique(dplyr::pull(coord_df, {{ZoneCol}}))  # Get vector of zones present in data
+    datums <- unique(dplyr::pull(coord_df, {{DatumCol}}))  # Get vector of datums present in data
+    new_coords <- tibble::tibble()
+    for (datum in datums) {
+      for (zone in zones) {
+        zone_num <- stringr::str_extract(zone, "\\d+")  # sp::CRS wants zone number only, e.g. 11, not 11N
+        # Figure out if zone is in N or S hemisphere. If unspecified, assume N. If S, add "+south" to proj string.
+        zone_letter <- tolower(stringr::str_extract(zone, "[A-Za-z]"))
+        if (!is.na(zone_letter) && zone_letter == "s") {
+          north_south <- " +south"
+        } else {
+          north_south <- ""
+        }
+        utm_CRS <- sp::CRS(glue::glue("+proj=utm +zone={zone_num} +datum={datum}{north_south}"))  # Set coordinate reference system for incoming UTM data
+        filtered_df <- coord_df %>%
+          dplyr::filter(!!rlang::ensym(ZoneCol) == zone, !!rlang::ensym(DatumCol) == datum)
+        sp_utm <- sp::SpatialPoints(filtered_df %>%
+                                      dplyr::select({{EastingCol}}, {{NorthingCol}}) %>%
+                                      as.matrix(),
+                                    proj4string = utm_CRS)  # Convert UTM columns into a SpatialPoints object
+        sp_geo <- sp::spTransform(sp_utm, latlong_CRS) %>%  # Transform UTM to Lat/Long
+          tibble::as_tibble()
+
+        # Set data$Long and data$Lat to newly converted values, but only for the zone and datum we are currently on in our for loop
+        filtered_df <- filtered_df %>% dplyr::mutate(decimalLatitude = sp_geo[[2]],
+                                                     decimalLongitude = sp_geo[[1]],
+                                                     LatLong_CRS = latlong_CRS@projargs)  # Store the coordinate reference system PROJ string in the dataframe
+        coord_df <- dplyr::left_join(coord_df, filtered_df, by = "_UTMJOINCOL")
+      }
+    }
+  })
+
+  df <- dplyr::left_join(df,
+                         dplyr::select(coord_df, decimalLatitude, decimalLongitude, LatLong_CRS, `_UTMJOINCOL`),
+                         by = "_UTMJOINCOL") %>%
+    dplyr::select(-`_UTMJOINCOL`)
+
+  return(df)
+}
+
+#' Coordinate Conversion from UTM to Latitude and Longitude
+#'
+#' @description
+#' `r lifecycle::badge("superseded")`
+#' `convert_utm_to_ll()` was superseded in favor of `generate_ll_from_utm()` to
+#' support and encourage including zone and datum columns in datasets. `generate_ll_from_utm()`
+#' also adds the ability to specify the coordinate reference system for lat/long coordinates,
+#' and accepts column names either quoted or unquoted for better compatibility with
+#' tidyverse piping.
+#' `convert_utm_to_ll()` takes your dataframe with UTM coordinates
 #' in separate Easting and Northing columns, and adds on an additional two
 #' columns with the converted decimalLatitude and decimalLongitude coordinates
 #' using the reference coordinate system WGS84. You may need to turn the VPN OFF
@@ -404,8 +539,8 @@ convert_utm_to_ll <- function(df,
   df <- cbind(Mid, lonlat)
   df <- plyr::rbind.fill(df, Mid2)
   df <- dplyr::rename(df,
-    EastingCol = "b", NorthingCol = "a",
-    "decimalLongitude" = x, "decimalLatitude" = y
+                      EastingCol = "b", NorthingCol = "a",
+                      "decimalLongitude" = x, "decimalLatitude" = y
   )
   return(df)
 }
diff --git a/R/utils.R b/R/utils.R
index 2ecc8c3..33660d8 100644
--- a/R/utils.R
+++ b/R/utils.R
@@ -35,4 +35,8 @@ globalVariables(c("any_of",
                   "y",
                   "capture.output",
                   "title",
-                  "% Accepted"))
\ No newline at end of file
+                  "% Accepted",
+                  "_UTMJOINCOL",
+                  "decimalLatitude",
+                  "decimalLongitude",
+                  "LatLong_CRS"))
\ No newline at end of file
diff --git a/Untitled/Untitled.Rmd b/Untitled/Untitled.Rmd
deleted file mode 100644
index c11ffeb..0000000
--- a/Untitled/Untitled.Rmd
+++ /dev/null
@@ -1,589 +0,0 @@
----
-output:
-  word_document: default
-  pdf_document: default
-bibliography: references.bib
-csl: national-park-service-DRR.csl
----
-
-```{=html}
-
-```
-```{r user_edited_parameterss, include=FALSE}
-# The title of your DRR. Should all DRR start with "Data Release Report:"? Should we enforce titles specifically referencing the data package(s) the Report is about?
-title <- "Sample DRR Title"
-
-# Optional and should only be included if publishing to the semi-official DRR series. Contact Joe if you are. If not, leave as NULL
-reportNumber <- ": get this number from Joe DeVivo"
-
-# This should match the Data Store Reference ID for this DRR. Eventually we should be able to pull this directly from the data package metadata.
-DRR_DSRefID <- 0000000
-
-#Author names and affiliations:
-
-#One way to think of the author information is that you are building a table:
-
-# Author | Affiliation | ORCID
-# Jane   | Institute 1 | 0000-1111-2222-3333
-# Jane   | Institute 2 | 0000-1111-2222-3333
-# John   | Institute 2 | NA
-
-#once the table is built, authors can be associated with the appropriate institute via relevant superscripts and the institutes can be listed only once in the DRR.
-
-# list the authors. If an author has multiple institutional affiliations, you must list the author multiple times. In this example, Jane Doe is listed twice because she has two affiliations.
-authorNames <- c(
-  "Jane Doe",
-  "Jane Doe",
-  "John Doe"
-)
-
-# List author affiliations. The order of author affiliations must match the order of the authors in AuthorNames. If an author has multiple affiliations, the author must be listed 2 (or more) times under authorNames (above) and each affiliation should be listed in order. If authors share the same affiliation, the affiliation should be listed once for each author. In this case, Managed Business Solutions (MBS) is listed twice because it is associated with two authors. MBS will only print to the DRR once.
-
-#Note that the entirety of each affiliation is enclosed in quotations. Do not worry about indentation or word wrapping.
-authorAffiliations <- c(
-  "NPS Inventory and Monitoring Division, 1201 Oakridge Dr., Suite 150, Fort Collins, Colorado",
-  
-  "Managed Business Solutions (MBS), a Sealaska Company, Contractor to the National Park Service, Natural Resource Stewardship and Science Directorate, 1201 Oakridge Dr., Suite 150, Fort Collins, Colorado",
-  
-  "Managed Business Solutions (MBS), a Sealaska Company, Contractor to the National Park Service, Natural Resource Stewardship and Science Directorate, 1201 Oakridge Dr., Suite 150, Fort Collins, Colorado"
-)
-
-# List the ORCID iDs for each author in the format "(xxxx-xxxx-xxxx-xxxx)". If an author does not have an ORCID iD, specify NA (no quotes). If an author is listed more than once (for instance because they have multiple institutional affiliations), the ORCID iD must also be listed more than once. For more information on ORCID iDs and to register an ORCID iD, see https://www.orcid.org. 
-
-# The order of the ORCID iDs must match the order of authors in authorNames. In this example, Jane Doe has an ORCID iD but John Doe does not. Jane's ORCID iD is listed twice because her name is listed twice in authorNames (because she has two authorAffiliations).
-authorORCID <- c(
-  "(0000-1111-2222-3333)", "(0000-1111-2222-3333)", NA
-  )
-
-# Replace the text below with your abstract.
-DRRabstract <- "Abstract Should go here. Multiple Lines are okay; it'll format correctly. Pay careful attention to non-standard characters, line breaks (
), carriage returns, and curly-quotes. You may find it useful to write the abstract in NotePad++ or some other text editor and not a word processor (such as Microsoft Word).\n\n - -Note that if you need multiple paragraphs or line breaks you can generate them using a combination of backslashes and n's. \n\n - -The abstract should succinctly describe the study, the assay(s) performed, the resulting data, and their reuse potential, but should not make any claims regarding new scientific findings. No references are allowed in this section." - -# DataStore reference ID for the data package associated with this report. You must have at least one data package.Eventually, we will automate importing much of this information from metadata. -dataPackageRefID <- c(9999999) - -# Must match title in DataStore and metadata -dataPackageTitle <- "Data Package Title" - -# Must match descriptions in the data package metadata -dataPackageDescription <- "Short title for data package1" - -# generates your data package DOI based on the data package DataStore reference ID. This is different from the DRR DOI! No need to edit this. -dataPackageDOI <- paste0("https://doi.org/10.57830/", dataPackageRefID) - -# list the file names in your data package. Do NOT include metadata files. -dataPackage_fileNames <- c( - "my_data.csv", - "my_data2.csv" -) - -# list the approximate size of each data file. Make sure the order corresponds to the order of of the file names in dataPackage_fileNames -dataPackage_fileSizes <- c("0.8 MB", "10 GB") - -# list a short, one-line description of each data file. Descriptions must be in the same order as the filenames. -dataPackage_fileDescript <- c( - "This is a short description of my_data.csv (a good guideline is 10 words or less).", - "This is a short description of my_data2.csv.") -``` - -```{r setup_do_not_edit, include=FALSE} -Rpackages <- c("markdown", - "rmarkdown", - "pander", - "knitr", - "yaml", - "kableExtra", - "devtools", - "tidyverse", - "here") - -inst <- Rpackages %in% installed.packages() -if (length(Rpackages[!inst]) > 0) { - install.packages(Rpackages[!inst], dep = TRUE, repos = "https://cloud.r-project.org") -} -lapply(Rpackages, library, character.only = TRUE) - -devtools::install_github("EmilyMarkowitz-NOAA/NMFSReports") -library(NMFSReports) -devtools::install_github("nationalparkservice/QCkit") -library(QCkit) -``` - -*`r (paste0("https://doi.org/10.38750/", DRR_DSRefID))`* - -```{r title_do_not_edit, echo=FALSE, results="asis"} -date <- format(Sys.time(), "%d %B, %Y") -cat("#", title, "\n") -if (!is.null(reportNumber)) { - subtitle <- paste0("Data Release Report ", reportNumber) - cat("###", subtitle) -} -``` - -```{r authors_do_not_edit, echo=FALSE, results="asis"} -author_list <- data.frame(authorNames, authorAffiliations, authorORCID) -unique_authors <- author_list %>% distinct(authorNames, - .keep_all = TRUE) -unique_affiliation <- author_list %>% distinct(authorAffiliations, - .keep_all = TRUE) - -#single author documents: -if(length(seq_along(unique_authors$authorNames)) == 1){ - - for (i in seq_along(unique_authors$authorNames)) { - curr <- unique_authors[i, ] - - #find all author affiliations - aff <- author_list[which(authorNames == curr$authorNames),] - aff <- aff$authorAffiliations - - #identify order of affiliation(s) in a unique list of affiliations - #build the superscripts for author affiliations - super_script <- unique_affiliation$authorAffiliations %in% aff - super <- which(super_script == TRUE) - script <- super - - if(length(seq_along(super)) > 
1){ - script <- NULL - j <- 1 - while(j < length(seq_along(super))){ - script <- append(script, paste0(super[j],",")) - j <- j+1 - } - if(j == length(seq_along(super))){ - script <- append(script, super[j]) - } - } - } - cat("#### ", curr$authorNames, sep="") - if (is.na(curr$authorORCID)) { - } - if (!is.na(curr$authorORCID)) { - orc <- paste0(" ", curr$authorORCID, "") - cat({{ orc }}) - } - cat(" ^",script,"^", " ", " ", sep="") - - #cat("#### ", unique_authors$authorNames, "^1^", sep="") - #if(!is.na(authorORCID)){ - # orc <- paste0(" https://orcid.org/", unique_authors$authorORCID) - # cat({{ orc }}, "\n") - #} - #cat("#### ", unique_authors$authorAffiliations, sep="") -} - -#multi author documents: -if(length(seq_along(unique_authors$authorNames)) > 1){ - for (i in seq_along(unique_authors$authorNames)) { - curr <- unique_authors[i, ] - - #find all author affiliations - aff <- author_list[which(authorNames == curr$authorNames),] - aff <- aff$authorAffiliations - - #identify order of affiliation(s) in a unique list of affiliations - #build the superscripts for author affiliations - super_script <- unique_affiliation$authorAffiliations %in% aff - super <- which(super_script == TRUE) - script <- super - - if(length(seq_along(super)) > 1){ - script <- NULL - j <- 1 - while(j < length(seq_along(super))){ - script <- append(script, paste0(super[j],",")) - j <- j+1 - } - if(j == length(seq_along(super))){ - script <- append(script, super[j]) - } - } - - # if NOT the second-to-last author: - if(i < (length(seq_along(unique_authors$authorNames)) - 1)){ - cat("#### ", curr$authorNames, " ", sep="") - if (is.na(curr$authorORCID)) { - } - if (!is.na(curr$authorORCID)) { - orc <- paste0(" ", curr$authorORCID, " ") - cat({{ orc }}) - } - cat( " ^", script, "^", ", ", " ", sep = "") - } - - # if IS the second-to-last author - if(i == (length(seq_along(unique_authors$authorNames)) - 1)){ - - #if 3 or more authors, include a comma before the "and": - if(length(seq_along(unique_authors$authorNames)) > 2){ - cat(curr$authorNames, sep="") - if (is.na(curr$authorORCID)) { - } - if (!is.na(curr$authorORCID)) { - orc <- paste0(" ", curr$authorORCID, " ") - cat({{ orc }}) - } - cat(" ^",script,"^", ", ", sep="") - cat("and ", sep="") - } - - #If only 2 authors, omit comma before "and": - if(length(seq_along(unique_authors$authorNames)) == 2){ - cat("#### ", curr$authorNames, sep="") - if (is.na(curr$authorORCID)) { - } - if (!is.na(curr$authorORCID)) { - orc <- paste0(" ", curr$authorORCID, " ") - cat({{ orc }}) - } - cat(" ^",script,"^ ", sep = "") - cat("and ", sep="") - } - } - - # if IS the Last author : - if(i == length(seq_along(unique_authors$authorNames))){ - cat(curr$authorNames, sep="") - if (is.na(curr$authorORCID)) { - } - if (!is.na(curr$authorORCID)) { - orc <- paste0(" ", curr$authorORCID, " ") - cat({{ orc }}) - } - cat(" ^", script, "^", sep = "") - } - } -} -cat("\n\n") -for(i in 1:nrow(unique_affiliation)){ - cat("^",i,"^ ", unique_affiliation[i,2], "\n\n", sep="") - } -``` - -# Abstract - -`r DRRabstract` - -
- -# Acknowledgements (optional) - -The Acknowledgements should contain text acknowledging non-author contributors. Acknowledgements should be brief, and should not include thanks to anonymous referees and editors or effusive comments. Grant or contribution numbers may be acknowledged. - -# Using citations in this document: - -To automate citations, add the citation to in bibtex format to the file "references.bib". You can manually copy and paste the bibtex for each reference in, or you can search for it from within Rstudio. From within Rstudio, make sure you are editing this document using the "Visual" view (as opposed to "Source"). From the "Insert" drop-down menu, select "\@ Citation..." (shortcut: Cntrl-Shift-F8). This will open a tool where you can view all the citations in your reference.bib file as well as search multiple databases for references, automatically insert the bibtex for the reference into your references.bib file (and customize the unique identifier if you'd like) and insert the in-text citation into the DRR template. - -Once a reference is in your references.bib file, from within this template you can simply type the '\@' symbol and select which reference to insert in the text. - -If you need to edit how the citation is displayed after inserting it into the text, switch back to the "Source" view. Each bibtex citation should start with a unique identifier; the example reference in the supplied references.bib file has the unique identifier "\@article{Scott1994,". Using the "Source" view in Rstudio, insert the reference in your text, by combining the "at" symbol with the portion of the unique identifier after the curly bracket: @Scott1994 . You can put a citation in parentheses using square brackets: [@Scott1994]. This will be rendered as (Scott, et al. 1994) in text. You can add multiple authors works a single parenthetical citation by separating them with a semi-colon. You can suppress the author and cite just the year by using a - symbol before the \@ : [-@Scott1994]. - -If you would like to format your citations manually, please feel free to do that instead. Make sure to examine the References section for examples of how to manually format each citation type. - -# Data Records (required) - -## Data Inputs (optional) - -If the data package being described was generated based on one or more pre-existing datasets, cite those datasets here. - -## Summary of Datasets Created (required) - -The Data Records section should be used to explain each data record associated with this work (for instance, a data package), including the DOI indicating where this information is stored, and provide an overview of the data files and their formats. Each external data record should be cited. Below is some sample text: - -This DRR describes the data package *`r dataPackageTitle`* which contains a metadata file and `r length(dataPackage_fileNames)` data files. These data were compiled and processed for dissemination by the National Park Service Inventory and Monitoring Division (IMD) and are available at `r dataPackageDOI` (see Table 1). - -```{r file_table, echo=FALSE} -filelist <- data.frame(dataPackage_fileNames, dataPackage_fileSizes, dataPackage_fileDescript) - -knitr::kable(filelist, caption = paste0("**Table 1. ", dataPackageTitle, ": List of data files.**"), col.names = c("**File Name**", "**Size**", "**Description**"), format = "pandoc") -``` - -See Appendix for additional notes and examples. 
- -# Data Quality Evaluation (required) - -The Data Quality Evaluation section should present any analyses that are needed to support the technical quality of the dataset. This section may be supported by figures and tables, as needed. *This is a required section*; authors must provide information to justify the reliability of their data. Wherever possible & appropriate, data quality evaluation should be presented in the context of data standards and quality control procedures as prescribed in the project's quality assurance planning documentation. - -**Required elements for this section** - -*Required Table* - -```{r data_acceptance_criteria, echo = FALSE, eval = TRUE} -# To turn off, set eval=FALSE. -# Generates a table of acceptance criteria for each of the data quality fields in your data package. Mitigations taken when data did not meet the acceptance criteria should be described textually in the Data Quality Evaluation section. - -# Specify which columns in your data package are data quality fields in the data_quality_fields variable. In the example below, data quality fields/columns in the data package are listed in the format [FieldName]_flag. These data quality fields relate to the respective temporal, taxonomic, and geographic data. - -data_quality_fields <- c( - "eventDate_flag", - "scientificName_flag", - "coordinate_flag" - ) - -# Brief description of the acceptance criteria for each respective data quality field. The order of the acceptance criteria must match the order of the data quality fields. - -data_quality_acceptance_criteria <- c( - "Sampling event date within the start and end dates of the project", - "Taxon exists within Integrated Taxonomic Information System and GBIF", - "Sampling location is within the park unit boundaries" - ) - -data_criteria <- data.frame(data_quality_fields = - str_remove(data_quality_fields, "_flag"), - data_quality_acceptance_criteria) - -data_criteria %>% - NMFSReports::format_cells(1:3, 1, "bold") %>% - knitr::kable(caption = "**Table 2. Acceptance criteria for data evaluated.**", - col.names=c("**Field**", - "**Acceptance Criteria**"), - format="pandoc", - align = 'c') - -``` - -```{r data_column_flagging, echo=FALSE, eval=TRUE} -# To turn off, set eval=FALSE. -# Generates a table summarizing QC at the column level within each file. All flagged columns are included. To add additional non-flagged columns, specify them with column names: cols=("my_unflagged_data1", "my_unflagged_data2)" or numbers: cols=c(1:4). All non-missing data in unflagged columns is assumed accepted. If a file has no flagged columns and no specified custom columns, all values for that data file will be listed as "NA". 
- -#set directory to the location of your data package: -dc_flags <- QCkit::get_custom_flags(here::here("Untitled", - "BICY_Example"), - output="columns") -dc_flags$`File Name` <- gsub(".csv", "", dc_flags$`File Name`) - - -colnames(dc_flags)[2]<-paste0("Measure", "^1^") -colnames(dc_flags)[4]<-paste0("A", "^2^") -colnames(dc_flags)[8]<-paste0("% Accepted", "^3^") - -file_names <- NULL -if (seq_along(unique(dc_flags$`File Name`)) < 2) { - file_names <- 1 - dc_flags <- dc_flags[,-1] -} - -#Generate the table: -dc_flags %>% - knitr::kable( - caption = '**Table 3: Summary of data quality flags for each column [A – Accepted; AE – Accepted but Estimated; P – Provisional; R – Rejected.]**', - format = "pandoc", - digits = 2, - align = 'c', - col.names = if (is.null(file_names)) { - c("**File Name**", "**Measure^1^**", "**Number of Records**", - "**A^2^**", "**AE**", "**R**", "**P**", "**% Accepted^3^**") - } else { - c( "**Measure^1^**", "**Number of Records**", "**A^2^**", "**AE**", - "**R**", "**P**", "**% Accepted^3^**") - }) %>% -kableExtra::add_footnote( - c("The '_flag' suffix has been omitted from column names for brevity.", - "All non-missing data in specified unflagged columns are considered accepted.", - "% Accepted is calculated as the number of accepted (where A and AE are both considered accepted) divided by the total number of observations (including any missing observations."), - notation = "number" - ) - -``` - -```{r data_package_flagging, echo=FALSE, eval=TRUE} -# To turn off, set eval=FALSE. -# Generates a table summarizing data quality across all flagged columns of each data file. To add additional non-flagged columns, specify them with column names: cols=("my_unflagged_data1", "my_unflagged_data2)" or numbers: cols=c(1:4). All non-missing data in unflagged columns is assumed accepted. If a file has no flagged columns and no specified custom columns, all values for that data file will be listed as "NA". - -#set directory to the location of your data package -dp_flags <- get_custom_flags(directory = here::here("Untitled", "BICY_Example"), output="files") - -#generate table: -dp_flags %>% - kableExtra::kbl(caption = '**Table 4: Summary of data quality flags for the data package [A – Accepted; AE – Accepted but Estimated; P – Provisional; R – Rejected.]**', - format = "pandoc", - col.names = c("**File Name**", "**A^1^**", "**AE**", "**R**", "**P**", "**% Accepted^2^**"), - digits=2, - align = 'c') %>% - kableExtra::add_footnote(c("All non-missing data in specified unflagged columns are considered accepted.", - "% Accepted is calculated as the number of accepted (where A and AE are both considered accepted) divided by the total number of observations plus the number of missing observations."), notation = "number") -``` - -Possible content **strongly Suggested to Include** - -- Occurrence rates or patterns in data that do not meet established standards or data quality objectives. - -Possible content **may include:** - -- experiments that support or validate the data-collection procedure(s) (e.g. negative controls, or an analysis of standards to confirm measurement linearity) -- statistical analyses of experimental error and variation -- general discussions of any procedures used to ensure reliable and unbiased data production, such as chain of custody procedures, blinding and randomization, sample tracking systems, etc. 
-- any other information needed for assessment of technical rigor by reviewers/users - -Generally, this **should not include:** - -- follow-up experiments aimed at testing or supporting an interpretation of the data -- statistical hypothesis testing (e.g. tests of statistical significance, identifying deferentially expressed genes, trend analysis, etc.) -- exploratory computational analyses like clustering and annotation enrichment (e.g. GO analysis). - -*Stock Text to include:* - -The data within the data records listed above have been reviewed by staff in the NPS Inventory and Monitoring Division to ensure accuracy, completeness, and consistency with documented data quality standards, as well as for usability and reproducibility (Table 3). Of the data that were evaluated for quality, XX.X% met data quality standards. The *`r dataPackageTitle`* is suitable for its intended use as of the date of processing (`r Sys.Date()`). - -# Usage Notes (required) - -The Usage Notes should contain brief instructions to assist other researchers with reuse of the data. This may include discussion of software packages (with appropriate citations) that are suitable for analysing the assay data files, suggested downstream processing steps (e.g. normalization, etc.), or tips for integrating or comparing the data records with other datasets. Authors are encouraged to provide code, programs or data-processing workflows if they may help others understand or use the data. - -For studies involving privacy or safety controls on public access to the data, this section should describe in detail these controls, including how authors can apply to access the data, what criteria will be used to determine who may access the data, and any limitations on data use. - -## Acquiring the Data Package - -This data package is available for download from the NPS DataStore at `r dataPackageDOI` and can be directly imported into R data frames using the NPSutils package - -# Methods - -Ideally these methods are identical to the methods listed in the metadata accompanying the data package that the DRR describes.Future versions of this template will pull directly from metadata. - -The Methods should cite previous methods under use but also be detailed enough describing data production including experimental design, data acquisition assays, and any computational processing (e.g. normalization, image feature extraction) such that others can understand the methods and processing steps without referring to associated publications. Cite and link to the DataStore reference for the protocol for detailed methods sufficient for reproducing the experiment or observational study. Related methods should be grouped under corresponding subheadings where possible, and methods should be described in enough detail to allow other researchers to interpret the full study. - -Specific data inputs and outputs should be explicitly cited in the text and included in the References section below, following the same [Chicago Manual of Style author-date format](https://www.chicagomanualofstyle.org/tools_citationguide/citation-guide-2.html) in text. See the [USGS data citation guidelines](https://www.usgs.gov/data-management/data-citation) for examples of how to cite data in text and in the References section. - -Authors are encouraged to consider creating a figure that outlines the experimental workflow(s) used to generate and analyse the data output(s) (Figure 1). 
- -```{r figure1, echo=FALSE, fig.cap="Example general workflow to include in the methods section."} -knitr::include_graphics(here::here("Untitled", - "BICY_Example", - "ProcessingWorkflow.png")) -``` - -## Data Collection and Sample Processing Methods (optional) - -Include a description of field methods and sample processing - -## Additional Data Sources (optional) - -Provide descriptions (with citations) of other data sources used. - -## Data Processing (required if done) - -Summarize process and results of any QC processes done that manipulate, change, or qualify data. - -## Code Availability (required) - -For all studies using custom code in the generation or processing of datasets, a statement must be included indicating whether and how the code can be accessed and any restrictions to access. This section should also include information on the versions of any software used, if relevant, and any specific variables or parameters used to generate, test, or process the current dataset. Actual analytical code should be provided in Appendices. - -# References (required) - -Provide sufficient information to locate the resource. If the citation has a DOI, include the DOI at the end of the citation, including the prefix. If you are citing documents that have unregistered DOIs (such as a data package that you are working on concurrently) still include the DOI. Electronic resources data and data services or web sites should include the date they were accessed. Keep the following line of code if you would like to automate generating and formatting references: - -::: {#refs} -::: - -If you would like to manually format your references, delete the preceding two lines and use the following examples instead: Include bibliographic information for any works cited (including the data package the DRR is describing) in the above sections, using the standard *NPS NR Publication Series* referencing style. - -See the following examples: - -## Agency, Company, etc. as Author Examples - -Fung Associates Inc. and SWCA Environmental Consultants. 2010. Assessment of natural resources and watershed conditions for Kalaupapa National Historical Park. Natural Resource Report. NPS/NPRC/WRD/NRR—2010/261. National Park Service, Fort Collins, Colorado. - -Greater Yellowstone Whitebark Pine Monitoring Working Group. 2014. Monitoring whitebark pine in the Greater Yellowstone Ecosystem: 2013 annual report. Natural Resource Data Series. NPS/GRYN/NRDS—2014/631. National Park Service. Fort Collins, Colorado. - -National Park Service (NPS). 2016. State of the park report for Zion National Park. State of the Park Reports. No. 23. National Park Service. Washington, District of Columbia. - -U.S. Forest Service (USFS). 1993. ECOMAP. National hierarchical framework of ecological units. U. S. Forest Service, Washington, D.C. - -## Traditional Journal Article Examples - -Bradbury, J. W., S. L. Vehrencamp, K. E. Clifton, and L. M. Clifton. 1996. The relationship between bite rate and local forage abundance in wild Thompson’s gazelles. Ecology 77:2237–2255. - -Oakley, K. L., L. P. Thomas, and S. G. Fancy. 2003. Guidelines for long-term monitoring protocols. Wildlife Society Bulletin 31(4):1000–1003. - -Sawaya, M. A., T. K. Ruth, S. Creel, J. J. Rotella, J. B. Stetz, H. B. Quigley, and S. T. Kalinowski. 2011. Evaluation of noninvasive genetic sampling methods for cougars in Yellowstone National Park. The Journal of Wildlife Management 75(3):612–622. - -## Book Example - -Harvill, A. M., Jr., T. R. Bradley, C. E. Stevens, T. F. 
Wieboldt, D. M. E. Ware, D. W. Ogle, and G. W. Ramsey. 1992. Atlas of the Virginia flora, third edition. Virginia Botanical Associates, Farmville, Virginia. - -## Book Chapter Examples - -McCauly, E. 1984. The estimation of abundance and biomass of zooplankton in samples. Pages 228–265 in J. A. Dowling and F. H. Rigler, editors. A manual on methods for the assessment of secondary productivity in fresh waters. Blackwell Scientific, Oxford, UK. - -Watson, P. J. 2004. Of caves and shell mounds in west-central Kentucky. Pages 159–164 in Of caves and shell mounds. The University of Alabama Press, Tuscaloosa, Alabama. - -## Published Report Examples - -Bass, S., R. E. Gallipeau, Jr., M. Van Stappen, J. Kumer, M. Wessner, S. Petersburg, L. L. Hays, J. Milstone, M. Soukup, M. Fletcher, L. G. Adams, and others. 1988. Highlights of natural resource management 1987. National Park Service, Denver, Colorado. - -Holthausen, R. S., M. G. Raphael, K. S. McKelvey, E. D. Forsman, E. E. Starkey, and D. E. Seaman. 1994. The contribution of federal and nonfederal habitats to the persistence of the northern spotted owl on the Olympic Peninsula, Washington. General Technical Report PNW–GTR–352. U.S. Forest Service, Corvallis, Oregon. - -Jackson, L. L., and L. P. Gough. 1991. Seasonal and spatial biogeochemical trends for chaparral vegetation and soil geochemistry in the Santa Monica Mountains National Recreation Area. U.S. Geological Survey, Denver. Open File Report 91–0005. - -## Unpublished Report Examples - -Conant, B., and J. I. Hodges. 1995. Western brant population estimates. U.S. Fish and Wildlife Service Unpublished Report, Juneau, Alaska. - -Conant, B., and J. F. Voelzer. 2001. Winter waterfowl survey: Mexico west coast and Baja California. U.S. Fish and Wildlife Service Unpublished Report, Juneau, Alaska. - -## Thesis/Dissertation Examples - -Diong, C. H. 1982. Population and biology of the feral pig (Sus scrofa L) in Kipahulu Valley, Mau’i. Dissertation. University of Hawai’i, Honolulu, Hawai’i. - -McTigue, K. M. 1992. Nutrient pulses and herbivory: Integrative control of primary producers in lakes. Thesis. University of Wisconsin, Madison, Wisconsin. - -## Conference Proceedings Examples - -Gunther, K. A. 1994. Changing problems in bear management: Yellowstone National Park twenty-plus years after the dumps. Ninth International Conference on Bear Research and Management. Missoula, MT, International Association for Bear Research and Management, Bozeman, Montana, February 1992:549–560. - -Webb, J. R., and J. N. Galloway. 1991. Potential acidification of streams in Mid-Appalachian Highlands: A problem with generalized assessments. Southern Appalachian Man and Biosphere Conference. Gatlinburg, Tennessee. - -## General Internet Examples - -Colorado Native Plant Society. 2016. Colorado Native Plant Society website. Available at: (accessed 07 March 2016). - -National Park Service (NPS). 2016a. IRMA Portal (Integrated Resource Management Applications) website. Available at: (accessed 07 March 2016). - -National Park Service (NPS). 2016b. Natural Resource Publications Management website. Available at: (accessed 07 March 2016). - -United Sates Fish and Wildlife Service (USFWS). 2016. Endangered Species website. Available at: (accessed 07 March 2016). - -## Online Data Warehouse Sites (sites that allow you see and download data from multiple sources) - -National Oceanographic and Atmospheric Association (NOAA). 2016. NOAA National Climatic Data Center website. Available at: (accessed 07 March 2016). 
- -Environmental Protection Agency (EPA). 2016. Storage and Retrieval Data Warehouse website (STORET). Available at: (accessed 07 March 2016). - -National Park Service (NPS). 2016c. NPScape Landscape Dynamics Metric Viewer website. Available at: (accessed 07 March 2016). - -National Park Service (NPS). 2016d. NPSpecies online application. Available at: (accessed 07 March 2016). - -United States Geologic Survey (USGS). 2016. BioData - Aquatic Bioassessment Data for the Nation. Available at: (accessed 07 March 2016). - -# Appendix A. Code Listing - -In most cases, Code listing is not required. If all QA/QC and data manipulations were performed elsewhere, you should cite that code in the methods (and leave the "Listing" code chunk as the default settings: eval=FALSE and echo=FALSE). If you have developed custom scripts, you can add those to DataStore with the reference type "Script" and cite them in the DRR. Some people have developed code to perform QA/QC or data manipulation within the DRR itself. In that case, you must set the "Listing" code chunk to eval=TRUE and echo=TRUE to fully document the QA/QC process. - -```{r listing, ref.label=knitr::all_labels(), echo=TRUE, eval=TRUE} - -``` - -\pagebreak - -# Appendix B. Session and Version Information - -In most cases you do not need to report session info (leave the "session-info" code chunk parameters in their default state: eval=FALSE). Session and version information is only necessary if you have set the "Listing" code chunk to eval=TRUE in appendix A. In that case, change the "session-info" code chunk parameters to eval=TRUE. - -```{r session_info, eval=TRUE, echo=FALSE, cache=FALSE} -sessionInfo() -Sys.time() -``` diff --git a/docs/404.html b/docs/404.html index 58a25f5..c1a016f 100644 --- a/docs/404.html +++ b/docs/404.html @@ -101,7 +101,7 @@

Page not found (404)